var/home/core/zuul-output/logs/kubelet.log
Dec 09 14:55:52 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 09 14:55:52 crc kubenswrapper[5107]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 09 14:55:52 crc kubenswrapper[5107]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 09 14:55:52 crc kubenswrapper[5107]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 09 14:55:52 crc kubenswrapper[5107]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 09 14:55:52 crc kubenswrapper[5107]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 09 14:55:52 crc kubenswrapper[5107]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.590879 5107 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597779 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597826 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597831 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597836 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597840 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597844 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597851 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597856 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597862 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597867 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597872 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597876 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597880 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597884 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597895 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597898 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597905 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597911 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597917 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597923 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597928 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597934 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597942 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597946 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597950 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597954 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597958 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597962 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597965 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597976 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597980 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597984 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597988 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597994 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.597999 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598003 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598007 5107 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598012 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598018 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598022 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598026 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598033 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598038 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598041 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598046 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598050 5107 
feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598053 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598058 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598062 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598066 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598070 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598074 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598078 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598082 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598086 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598090 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598094 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598099 5107 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598104 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598109 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598116 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598121 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598126 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598131 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598134 5107 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598138 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598142 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598146 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598150 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598154 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598158 5107 feature_gate.go:328] unrecognized 
feature gate: AdminNetworkPolicy Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598162 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598166 5107 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598170 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598175 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598179 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598183 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598187 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598192 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598197 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598200 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598204 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598208 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598212 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598216 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598220 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598901 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598911 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598916 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598920 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598924 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598928 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598933 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598937 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598941 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598945 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598949 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598953 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598957 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598961 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598965 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598969 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598973 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598977 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598981 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598987 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598991 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598995 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.598999 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599003 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599008 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599012 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities 
Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599016 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599020 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599024 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599028 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599032 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599036 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599040 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599044 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599047 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599052 5107 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599056 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599060 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599063 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599068 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599071 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599075 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599079 5107 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599083 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599087 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599091 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599094 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599098 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599102 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599105 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599110 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599115 5107 feature_gate.go:328] unrecognized 
feature gate: VSphereMixedNodeEnv Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599119 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599123 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599127 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599131 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599134 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599140 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599143 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599149 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599153 5107 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599157 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599161 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599166 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599172 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599176 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599180 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599184 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599189 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599192 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599196 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599200 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599204 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599208 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599212 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599216 5107 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599220 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 
14:55:52.599225 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599230 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599234 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599238 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599242 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599246 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599251 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599255 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.599259 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599798 5107 flags.go:64] FLAG: --address="0.0.0.0" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599816 5107 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599824 5107 flags.go:64] FLAG: --anonymous-auth="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599831 5107 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599839 5107 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599850 5107 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599856 5107 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599863 5107 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599868 5107 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599873 5107 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599877 5107 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599882 5107 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599887 5107 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599892 5107 flags.go:64] FLAG: --cgroup-root="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599896 5107 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599901 5107 flags.go:64] FLAG: --client-ca-file="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599906 5107 flags.go:64] FLAG: --cloud-config="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599911 5107 flags.go:64] FLAG: --cloud-provider="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599915 5107 flags.go:64] FLAG: --cluster-dns="[]" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599922 5107 
flags.go:64] FLAG: --cluster-domain="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599927 5107 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599931 5107 flags.go:64] FLAG: --config-dir="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599936 5107 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599941 5107 flags.go:64] FLAG: --container-log-max-files="5" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599955 5107 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599959 5107 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599965 5107 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599970 5107 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599975 5107 flags.go:64] FLAG: --contention-profiling="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599982 5107 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599987 5107 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599993 5107 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.599998 5107 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600005 5107 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600010 5107 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600015 5107 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600020 5107 flags.go:64] FLAG: --enable-load-reader="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600028 5107 flags.go:64] FLAG: --enable-server="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600032 5107 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600038 5107 flags.go:64] FLAG: --event-burst="100" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600042 5107 flags.go:64] FLAG: --event-qps="50" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600046 5107 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600050 5107 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600054 5107 flags.go:64] FLAG: --eviction-hard="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600059 5107 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600063 5107 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600067 5107 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600070 5107 flags.go:64] FLAG: --eviction-soft="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600074 5107 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 09 14:55:52 crc kubenswrapper[5107]: 
I1209 14:55:52.600078 5107 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600082 5107 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600086 5107 flags.go:64] FLAG: --experimental-mounter-path="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600090 5107 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600094 5107 flags.go:64] FLAG: --fail-swap-on="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600097 5107 flags.go:64] FLAG: --feature-gates="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600103 5107 flags.go:64] FLAG: --file-check-frequency="20s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600107 5107 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600111 5107 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600115 5107 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600119 5107 flags.go:64] FLAG: --healthz-port="10248" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600122 5107 flags.go:64] FLAG: --help="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600126 5107 flags.go:64] FLAG: --hostname-override="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600130 5107 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600134 5107 flags.go:64] FLAG: --http-check-frequency="20s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600139 5107 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600142 5107 flags.go:64] FLAG: --image-credential-provider-config="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600146 5107 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600150 5107 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600153 5107 flags.go:64] FLAG: --image-service-endpoint="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600160 5107 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600164 5107 flags.go:64] FLAG: --kube-api-burst="100" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600168 5107 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600171 5107 flags.go:64] FLAG: --kube-api-qps="50" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600175 5107 flags.go:64] FLAG: --kube-reserved="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600178 5107 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600182 5107 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600186 5107 flags.go:64] FLAG: --kubelet-cgroups="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600190 5107 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600194 5107 flags.go:64] FLAG: --lock-file="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600198 5107 
flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600202 5107 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600206 5107 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600213 5107 flags.go:64] FLAG: --log-json-split-stream="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600217 5107 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600221 5107 flags.go:64] FLAG: --log-text-split-stream="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600225 5107 flags.go:64] FLAG: --logging-format="text" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600229 5107 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600233 5107 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600237 5107 flags.go:64] FLAG: --manifest-url="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600241 5107 flags.go:64] FLAG: --manifest-url-header="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600247 5107 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600251 5107 flags.go:64] FLAG: --max-open-files="1000000" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600256 5107 flags.go:64] FLAG: --max-pods="110" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600261 5107 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600266 5107 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600270 5107 flags.go:64] FLAG: --memory-manager-policy="None" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600274 5107 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600278 5107 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600282 5107 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600286 5107 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600298 5107 flags.go:64] FLAG: --node-status-max-images="50" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600305 5107 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600309 5107 flags.go:64] FLAG: --oom-score-adj="-999" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600313 5107 flags.go:64] FLAG: --pod-cidr="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600317 5107 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600324 5107 flags.go:64] FLAG: --pod-manifest-path="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600327 5107 flags.go:64] FLAG: --pod-max-pids="-1" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600336 5107 flags.go:64] FLAG: --pods-per-core="0" Dec 09 14:55:52 crc 
kubenswrapper[5107]: I1209 14:55:52.600340 5107 flags.go:64] FLAG: --port="10250" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600344 5107 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600360 5107 flags.go:64] FLAG: --provider-id="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600365 5107 flags.go:64] FLAG: --qos-reserved="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600369 5107 flags.go:64] FLAG: --read-only-port="10255" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600373 5107 flags.go:64] FLAG: --register-node="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600377 5107 flags.go:64] FLAG: --register-schedulable="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600381 5107 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600389 5107 flags.go:64] FLAG: --registry-burst="10" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600392 5107 flags.go:64] FLAG: --registry-qps="5" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600396 5107 flags.go:64] FLAG: --reserved-cpus="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600400 5107 flags.go:64] FLAG: --reserved-memory="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600405 5107 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600409 5107 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600413 5107 flags.go:64] FLAG: --rotate-certificates="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600416 5107 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600420 5107 flags.go:64] FLAG: --runonce="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600424 5107 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600428 5107 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600432 5107 flags.go:64] FLAG: --seccomp-default="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600436 5107 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600440 5107 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600444 5107 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600448 5107 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600452 5107 flags.go:64] FLAG: --storage-driver-password="root" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600459 5107 flags.go:64] FLAG: --storage-driver-secure="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600463 5107 flags.go:64] FLAG: --storage-driver-table="stats" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600468 5107 flags.go:64] FLAG: --storage-driver-user="root" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600474 5107 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600478 5107 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600483 
5107 flags.go:64] FLAG: --system-cgroups="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600486 5107 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600492 5107 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600496 5107 flags.go:64] FLAG: --tls-cert-file="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600499 5107 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600505 5107 flags.go:64] FLAG: --tls-min-version="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600508 5107 flags.go:64] FLAG: --tls-private-key-file="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600512 5107 flags.go:64] FLAG: --topology-manager-policy="none" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600516 5107 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600520 5107 flags.go:64] FLAG: --topology-manager-scope="container" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600524 5107 flags.go:64] FLAG: --v="2" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600529 5107 flags.go:64] FLAG: --version="false" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600535 5107 flags.go:64] FLAG: --vmodule="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600541 5107 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.600545 5107 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600652 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600657 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600661 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600664 5107 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600668 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600675 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600679 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600682 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600685 5107 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600689 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600693 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600696 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600701 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600704 5107 
feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600708 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600713 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600718 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600722 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600725 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600729 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600732 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600736 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600739 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600743 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600746 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600750 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600753 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600757 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600760 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600763 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600767 5107 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600770 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600774 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600777 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600780 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600784 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600787 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600792 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600795 5107 feature_gate.go:328] unrecognized feature gate: 
AWSDedicatedHosts Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600799 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600802 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600806 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600809 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600812 5107 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600816 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600820 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600824 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600829 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600833 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600838 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600842 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600845 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600849 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600852 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600855 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600859 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600863 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600866 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600870 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600873 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600876 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600879 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600883 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600886 5107 feature_gate.go:328] unrecognized feature gate: 
ClusterVersionOperatorConfiguration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600890 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600893 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600896 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600899 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600903 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600907 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600911 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600914 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600917 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600920 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600924 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600927 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600930 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600935 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600938 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600941 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600944 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600948 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600952 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600956 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600959 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.600962 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.601121 5107 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.612664 5107 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.612713 5107 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612776 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612787 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612791 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612795 5107 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612799 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612803 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612808 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612813 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612818 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612823 5107 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612828 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612833 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612837 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612841 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612844 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612848 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612851 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612854 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612858 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612862 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612867 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612872 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612875 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612879 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612882 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612886 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612889 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612893 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612896 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612899 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612903 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612908 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612912 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612915 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612919 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612922 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612926 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612929 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612933 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612937 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612941 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612944 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612948 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612951 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612955 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 
14:55:52.612958 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612961 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612964 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612968 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612971 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612974 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612977 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612981 5107 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612984 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612988 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612991 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612994 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.612997 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613003 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613006 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613010 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613021 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613025 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613028 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613040 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613046 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613051 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613056 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613060 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613064 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613067 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613071 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613075 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613078 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613081 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613085 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613088 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613092 5107 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613095 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613098 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613102 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613106 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613109 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613112 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613115 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613119 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.613127 5107 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613258 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613264 5107 
feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613267 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613272 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613276 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613279 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613283 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613286 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613289 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613293 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613302 5107 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613305 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613308 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613311 5107 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613315 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613318 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613321 5107 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613325 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613329 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613336 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613340 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613345 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613361 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613365 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613369 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613373 5107 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613377 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613380 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613384 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613387 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613390 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613393 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613397 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613400 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613403 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613406 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613410 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613413 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613416 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613419 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613422 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613426 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613429 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613435 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613438 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613442 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613445 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613448 5107 
feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613451 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613455 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613460 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613464 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613467 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613471 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613474 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613478 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613482 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613485 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613489 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613492 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613496 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613500 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613504 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613507 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613511 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613515 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613518 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613521 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613525 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613529 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613533 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613538 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613543 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613549 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613552 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613557 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613562 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613567 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613571 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613575 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613581 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613585 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613590 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613594 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613597 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:55:52 crc kubenswrapper[5107]: W1209 14:55:52.613601 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.613608 5107 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.614027 5107 server.go:962] "Client rotation is on, will bootstrap in background" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.617836 5107 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.620636 5107 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.620768 5107 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.621249 5107 server.go:1019] "Starting client certificate rotation" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.621419 5107 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 
14:55:52.621517 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.632805 5107 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.635003 5107 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.635321 5107 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.650913 5107 log.go:25] "Validated CRI v1 runtime API" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.681447 5107 log.go:25] "Validated CRI v1 image API" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.683392 5107 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.688524 5107 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-09-14-49-57-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.688560 5107 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.704387 5107 manager.go:217] Machine: {Timestamp:2025-12-09 14:55:52.701613405 +0000 UTC m=+0.425318314 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:084757af-33e8-4017-8563-50553d5c8b31 BootID:9b1559a0-2d18-46c4-a06d-382661d2a0c3 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 
DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:aa:7f:30 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:aa:7f:30 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:14:62:cb Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ec:47:ef Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:28:e1:df Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:76:c5:da Speed:-1 Mtu:1496} {Name:eth10 MacAddress:72:74:c2:f6:3e:64 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f2:22:8c:e7:0a:45 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.704642 5107 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.704850 5107 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.705618 5107 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.705655 5107 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.705897 5107 topology_manager.go:138] "Creating topology manager with none policy" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.705909 5107 container_manager_linux.go:306] "Creating device plugin manager" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.705932 5107 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.707750 5107 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.708278 5107 state_mem.go:36] "Initialized new in-memory state store" 
Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.708663 5107 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.709182 5107 kubelet.go:491] "Attempting to sync node with API server" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.709201 5107 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.709221 5107 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.709240 5107 kubelet.go:397] "Adding apiserver pod source" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.709262 5107 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.711572 5107 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.711590 5107 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.712969 5107 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.712986 5107 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.714560 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.714711 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.715284 5107 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.715710 5107 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.716504 5107 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.719808 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.719922 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.720001 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.720061 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.720199 5107 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/nfs" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.720832 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.720857 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.720868 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.720889 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.720909 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.720922 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.721085 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.722071 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.722086 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.723082 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.734418 5107 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.734510 5107 server.go:1295] "Started kubelet" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.734646 5107 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.734899 5107 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.735098 5107 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.736136 5107 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.736608 5107 server.go:317] "Adding debug handlers to kubelet server" Dec 09 14:55:52 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.737106 5107 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.737400 5107 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.736585 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f93dc2885950a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.73445505 +0000 UTC m=+0.458159939,LastTimestamp:2025-12-09 14:55:52.73445505 +0000 UTC m=+0.458159939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.738113 5107 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.738130 5107 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.738156 5107 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.738199 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.738748 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.738893 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.742360 5107 factory.go:55] Registering systemd factory Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.742488 5107 factory.go:223] Registration of the systemd container factory successfully Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.743703 5107 factory.go:153] Registering CRI-O factory Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.743749 5107 factory.go:223] Registration of the crio container factory successfully Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.743900 5107 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.743943 5107 factory.go:103] Registering Raw factory Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.743965 5107 manager.go:1196] Started watching for new ooms in manager Dec 09 14:55:52 crc 
kubenswrapper[5107]: I1209 14:55:52.745223 5107 manager.go:319] Starting recovery of all containers Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.788641 5107 manager.go:324] Recovery completed Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793450 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793528 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793540 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793549 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793582 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793595 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793606 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793616 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793633 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793669 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793690 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793703 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793714 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793723 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793737 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793747 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793757 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793797 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793810 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793840 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793854 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793869 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.793882 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.794081 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.794092 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.794121 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.794139 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.794155 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.794191 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795432 5107 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795464 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795490 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795526 5107 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795575 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795589 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795612 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795624 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795637 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795650 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795663 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795675 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795708 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795721 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795733 5107 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795745 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795759 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795772 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795786 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795800 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795825 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795839 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795851 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795863 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795875 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795887 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795899 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795910 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795947 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795960 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795971 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795981 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.795993 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796004 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796019 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796031 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796048 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" 
volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796059 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796074 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796086 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796098 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796109 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796121 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796132 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796197 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796213 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796237 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796252 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" 
volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796279 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796292 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796303 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796329 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796396 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796410 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796422 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796436 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796449 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796462 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796501 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" 
volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796513 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796537 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796548 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796558 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796569 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796579 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796590 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796608 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796622 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796651 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796664 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" 
seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796676 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796724 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796738 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796752 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796765 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796777 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796809 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796821 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796832 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796859 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796874 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 09 14:55:52 
crc kubenswrapper[5107]: I1209 14:55:52.796885 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796896 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796908 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796949 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796960 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796972 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796983 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.796996 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797007 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797017 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797030 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797065 5107 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797080 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797091 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797102 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797122 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797137 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797152 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797165 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797193 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797204 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797225 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797238 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797249 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797262 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797273 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797284 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797300 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797312 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797473 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797878 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797900 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797938 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797952 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797964 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.797978 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798009 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798022 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798036 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798051 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798073 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798085 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798097 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798109 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798121 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798133 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798147 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798158 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798188 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798201 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798216 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798228 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798240 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798284 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798298 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798327 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" 
volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798368 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798408 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798435 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798452 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798467 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798481 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798528 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798564 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798592 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798606 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798621 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798635 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798648 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798663 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798675 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798702 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798728 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798741 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798753 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798768 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798779 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798793 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" 
volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798806 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798818 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798845 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798856 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798869 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798880 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798892 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798906 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798918 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798976 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.798996 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" 
volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799017 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799042 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799055 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799068 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799080 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799091 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799107 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799119 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799132 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799144 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799157 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" 
volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799169 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799183 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799197 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799225 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799239 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799251 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799264 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799281 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799292 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799304 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799318 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" 
volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799335 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799363 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799376 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799390 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799439 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799454 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799466 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799479 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799494 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799507 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799518 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" 
volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799530 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799541 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799555 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799566 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799579 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799590 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799602 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799614 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799625 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799635 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799647 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799659 5107 reconstruct.go:97] "Volume reconstruction finished" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.799670 5107 reconciler.go:26] "Reconciler: start to sync state" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.810581 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.813891 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.813936 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.813951 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.814388 5107 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.816537 5107 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.816587 5107 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.816634 5107 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.816648 5107 kubelet.go:2451] "Starting kubelet main sync loop" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.816802 5107 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.817815 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.821933 5107 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.821966 5107 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.822011 5107 state_mem.go:36] "Initialized new in-memory state store" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.827741 5107 policy_none.go:49] "None policy: Start" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.827797 5107 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.827816 5107 state_mem.go:35] "Initializing new in-memory state store" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.839158 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.887742 5107 manager.go:341] "Starting Device Plugin manager" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.888120 5107 manager.go:517] "Failed to read data 
from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.888180 5107 server.go:85] "Starting device plugin registration server" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.888756 5107 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.888776 5107 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.889097 5107 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.889173 5107 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.889179 5107 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.892866 5107 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.892966 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.916890 5107 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.917134 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.917930 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.917998 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.918018 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.918982 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.919222 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.919330 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.919704 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.919732 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.919742 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.919924 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.919974 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.919988 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.920337 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.920567 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.920633 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.920805 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.920844 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.920855 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.921192 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.921227 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.921266 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.921639 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.921733 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.921769 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.922229 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.922257 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.922272 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.922298 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.922335 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.922357 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.922903 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.923080 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.923126 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.923900 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.923931 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.923955 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.923968 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.924015 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.924029 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.925372 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.925424 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.926208 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.926242 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.926253 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.940866 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.955599 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.970837 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.978850 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.988927 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.990097 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.990156 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.990172 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:52 crc kubenswrapper[5107]: I1209 14:55:52.990202 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:55:52 crc kubenswrapper[5107]: E1209 14:55:52.990870 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.002999 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003071 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: 
E1209 14:55:53.003017 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003201 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003410 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003455 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003485 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003508 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003531 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003558 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003734 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003804 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003838 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003870 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003899 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003925 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003953 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.003982 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004091 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004119 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004164 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 
14:55:53.004209 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004242 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004263 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004286 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004447 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004479 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004619 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004726 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.004718 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.006666 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: E1209 14:55:53.008651 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106008 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106059 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106082 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106096 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106117 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106137 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106154 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106169 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106184 5107 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106201 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106215 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106234 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106249 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106265 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106281 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106297 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106759 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106803 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 
14:55:53.106843 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106798 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106848 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106905 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106908 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106889 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106779 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106939 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106965 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.107032 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106983 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106992 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106987 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.106994 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.191304 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.192859 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.192949 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.192965 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.193010 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:55:53 crc kubenswrapper[5107]: E1209 14:55:53.193798 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.256670 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.271875 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.279428 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: W1209 14:55:53.290753 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-79587064dd27baf98b85d4a9c34dce357f5be6d98e31e83816cfd37abf4cb61c WatchSource:0}: Error finding container 79587064dd27baf98b85d4a9c34dce357f5be6d98e31e83816cfd37abf4cb61c: Status 404 returned error can't find the container with id 79587064dd27baf98b85d4a9c34dce357f5be6d98e31e83816cfd37abf4cb61c Dec 09 14:55:53 crc kubenswrapper[5107]: W1209 14:55:53.295362 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-d4c15d1cbf95016cd492fdd8e85dddfecfccddc247194e31316ade94eccb91f5 WatchSource:0}: Error finding container d4c15d1cbf95016cd492fdd8e85dddfecfccddc247194e31316ade94eccb91f5: Status 404 returned error can't find the container with id d4c15d1cbf95016cd492fdd8e85dddfecfccddc247194e31316ade94eccb91f5 Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.295658 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:55:53 crc kubenswrapper[5107]: W1209 14:55:53.302864 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-a44e7244ac01fa897b9b68c77a845949a14a7ed00a2597efbdf118b371f046bf WatchSource:0}: Error finding container a44e7244ac01fa897b9b68c77a845949a14a7ed00a2597efbdf118b371f046bf: Status 404 returned error can't find the container with id a44e7244ac01fa897b9b68c77a845949a14a7ed00a2597efbdf118b371f046bf Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.303734 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.309538 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:53 crc kubenswrapper[5107]: E1209 14:55:53.341952 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.594265 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.596231 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.596288 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.596301 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.596338 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:55:53 crc kubenswrapper[5107]: E1209 14:55:53.597056 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Dec 09 14:55:53 crc kubenswrapper[5107]: E1209 14:55:53.682695 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:55:53 crc kubenswrapper[5107]: E1209 14:55:53.720729 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.724555 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Dec 09 14:55:53 crc kubenswrapper[5107]: E1209 14:55:53.763085 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.826889 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5da809204ffb677ab26ac08d3b8691cf4de0639661a556cec289f6a56e57df5f"} Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.828006 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a44e7244ac01fa897b9b68c77a845949a14a7ed00a2597efbdf118b371f046bf"} Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.828980 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d4c15d1cbf95016cd492fdd8e85dddfecfccddc247194e31316ade94eccb91f5"} Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.830742 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"79587064dd27baf98b85d4a9c34dce357f5be6d98e31e83816cfd37abf4cb61c"} Dec 09 14:55:53 crc kubenswrapper[5107]: I1209 14:55:53.831496 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"0ef2e534f0832c287b66ff9f8034509b08287eb2d07a96d077966be2aa6f1f9a"} Dec 09 14:55:54 crc kubenswrapper[5107]: E1209 14:55:54.142703 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Dec 09 14:55:54 crc kubenswrapper[5107]: E1209 14:55:54.199956 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.397760 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.398834 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.398878 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.398893 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.398921 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:55:54 crc kubenswrapper[5107]: E1209 14:55:54.399320 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.644095 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:55:54 crc kubenswrapper[5107]: E1209 14:55:54.645557 5107 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 09 14:55:54 crc 
kubenswrapper[5107]: I1209 14:55:54.724252 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.838278 5107 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5" exitCode=0 Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.838320 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5"} Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.838672 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.839710 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.839781 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.839804 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:54 crc kubenswrapper[5107]: E1209 14:55:54.840200 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.845944 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3b741aedb777807194ba434397b4d2a6063c1648282792a0f733d3b340b26774"} Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.845979 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5da0b9bfefb1775c7984b5581acb0ff5ac968a5aa3abacc68f3a3cad90bed680"} Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.845995 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7d7820e8e52e22f661c6cb580380ffd7f90da43e5aa390c88f1d73ea9a5afe59"} Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.846020 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca"} Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.846197 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.848524 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.848561 5107 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.848573 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:54 crc kubenswrapper[5107]: E1209 14:55:54.848847 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.851439 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b" exitCode=0 Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.851525 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b"} Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.851771 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.852760 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.852792 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.852804 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:54 crc kubenswrapper[5107]: E1209 14:55:54.853094 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.854461 5107 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb" exitCode=0 Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.854516 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb"} Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.854644 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.855436 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.855557 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.855607 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.855619 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:54 crc kubenswrapper[5107]: E1209 14:55:54.855950 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.856048 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.856068 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.856078 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.857476 5107 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f" exitCode=0 Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.857518 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f"} Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.857812 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.858676 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.858709 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:54 crc kubenswrapper[5107]: I1209 14:55:54.858720 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:54 crc kubenswrapper[5107]: E1209 14:55:54.858922 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.725460 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Dec 09 14:55:55 crc kubenswrapper[5107]: E1209 14:55:55.744415 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Dec 09 14:55:55 crc kubenswrapper[5107]: E1209 14:55:55.841093 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.863747 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0"} Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.863806 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607"} Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.863816 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928"} Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.865239 5107 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52" exitCode=0 Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.865348 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52"} Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.865541 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.866420 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.866469 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.866483 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:55 crc kubenswrapper[5107]: E1209 14:55:55.866746 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.869970 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.869986 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e"} Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.870745 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.870779 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.870791 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:55 crc kubenswrapper[5107]: E1209 14:55:55.871196 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.874005 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940"} Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.874056 5107 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.874071 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6"} Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.874085 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37"} Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.874213 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.874608 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.874644 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.874657 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:55 crc kubenswrapper[5107]: E1209 14:55:55.874862 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.875500 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.875555 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:55 crc kubenswrapper[5107]: I1209 14:55:55.875569 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:55 crc kubenswrapper[5107]: E1209 14:55:55.875956 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:55.999655 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.003206 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.003251 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.003265 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.003291 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:55:56 crc kubenswrapper[5107]: E1209 14:55:56.003852 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.882249 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"407c367cde40fc33fe02dd83c4e64894c2ed320f862d49d4687db4ceb0a006de"} Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.882334 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed"} Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.883245 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.884234 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.884271 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.884285 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:56 crc kubenswrapper[5107]: E1209 14:55:56.884570 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.885110 5107 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0" exitCode=0 Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.885177 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0"} Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.885290 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.885522 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.885577 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.885844 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.886308 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.886367 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.886382 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.886411 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.886454 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 
14:55:56.886470 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:56 crc kubenswrapper[5107]: E1209 14:55:56.886715 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:56 crc kubenswrapper[5107]: E1209 14:55:56.886973 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.888029 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.888067 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:56 crc kubenswrapper[5107]: I1209 14:55:56.888085 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:56 crc kubenswrapper[5107]: E1209 14:55:56.888297 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.262858 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.535773 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.536047 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.537015 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.537062 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.537075 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:57 crc kubenswrapper[5107]: E1209 14:55:57.537509 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.892822 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"bf0a296ed891533100528ac09d9e9b54984dcbd0b82dc3b71f1fdbb6a16a436e"} Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.892903 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"065d21f3d7d7ddf58cbad5139235697b04e42f29e61ced7e1efba1de3555ae53"} Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.892919 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"64d96edfb4f9b0ef8edd081ac1fa1a187a546788c54bb2dda482d17bcc17d37a"} Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.892932 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4459987e648ce73b2166ed2c4bda9dcb074c0f7ed1087788e224ae7445163de2"} Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.892963 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.892996 5107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.893030 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.893607 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.893639 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.893648 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.893653 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.893720 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:57 crc kubenswrapper[5107]: I1209 14:55:57.893739 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:57 crc kubenswrapper[5107]: E1209 14:55:57.893964 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:57 crc kubenswrapper[5107]: E1209 14:55:57.894499 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.534154 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.709592 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.789141 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.901701 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4ea5d580221066f1f04f75db2bc68afd26f37ae4331f25097f74f72a526bb4fe"} Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.901933 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.902020 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.902766 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.902838 5107 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.902860 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:58 crc kubenswrapper[5107]: E1209 14:55:58.903168 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.903285 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.903375 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:58 crc kubenswrapper[5107]: I1209 14:55:58.903396 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:58 crc kubenswrapper[5107]: E1209 14:55:58.903735 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.204304 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.205473 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.205528 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.205542 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.205575 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.905572 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.905609 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.906700 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.906756 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.906773 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:55:59 crc kubenswrapper[5107]: E1209 14:55:59.907221 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.907768 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.907822 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:55:59 crc kubenswrapper[5107]: I1209 14:55:59.907838 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 
09 14:55:59 crc kubenswrapper[5107]: E1209 14:55:59.908372 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.535920 5107 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.536067 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.581058 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.581348 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.582566 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.582608 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.582622 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:00 crc kubenswrapper[5107]: E1209 14:56:00.582967 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.586590 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.637385 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.915562 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.915648 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.915702 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.916780 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.916833 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.916843 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.917424 5107 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:00 crc kubenswrapper[5107]: E1209 14:56:00.917446 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.917488 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:00 crc kubenswrapper[5107]: I1209 14:56:00.917509 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:00 crc kubenswrapper[5107]: E1209 14:56:00.918051 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:01 crc kubenswrapper[5107]: I1209 14:56:01.918484 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:01 crc kubenswrapper[5107]: I1209 14:56:01.919504 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:01 crc kubenswrapper[5107]: I1209 14:56:01.919582 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:01 crc kubenswrapper[5107]: I1209 14:56:01.919597 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:01 crc kubenswrapper[5107]: E1209 14:56:01.919939 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:02 crc kubenswrapper[5107]: E1209 14:56:02.893217 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:56:03 crc kubenswrapper[5107]: I1209 14:56:03.344763 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:56:03 crc kubenswrapper[5107]: I1209 14:56:03.345055 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:03 crc kubenswrapper[5107]: I1209 14:56:03.346321 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:03 crc kubenswrapper[5107]: I1209 14:56:03.346402 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:03 crc kubenswrapper[5107]: I1209 14:56:03.346417 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:03 crc kubenswrapper[5107]: E1209 14:56:03.346844 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:03 crc kubenswrapper[5107]: I1209 14:56:03.351287 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:56:03 crc kubenswrapper[5107]: I1209 14:56:03.923899 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:03 crc kubenswrapper[5107]: I1209 14:56:03.924725 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:03 crc 
kubenswrapper[5107]: I1209 14:56:03.924785 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:03 crc kubenswrapper[5107]: I1209 14:56:03.924796 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:03 crc kubenswrapper[5107]: E1209 14:56:03.925260 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:06 crc kubenswrapper[5107]: I1209 14:56:06.512991 5107 trace.go:236] Trace[1726238135]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:55:56.510) (total time: 10002ms): Dec 09 14:56:06 crc kubenswrapper[5107]: Trace[1726238135]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (14:56:06.512) Dec 09 14:56:06 crc kubenswrapper[5107]: Trace[1726238135]: [10.002113632s] [10.002113632s] END Dec 09 14:56:06 crc kubenswrapper[5107]: E1209 14:56:06.513065 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:56:06 crc kubenswrapper[5107]: I1209 14:56:06.724973 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 09 14:56:06 crc kubenswrapper[5107]: I1209 14:56:06.860167 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:56:06 crc kubenswrapper[5107]: I1209 14:56:06.860264 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 14:56:06 crc kubenswrapper[5107]: I1209 14:56:06.865833 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:56:06 crc kubenswrapper[5107]: I1209 14:56:06.865928 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.302062 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.302314 5107 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.303766 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.303856 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.303873 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:08 crc kubenswrapper[5107]: E1209 14:56:08.304875 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.343211 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.542268 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.542604 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.543932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.543989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.544002 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:08 crc kubenswrapper[5107]: E1209 14:56:08.544478 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.549160 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.942813 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.943405 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.943761 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.943827 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.943846 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:08 crc kubenswrapper[5107]: E1209 14:56:08.944456 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.944581 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.944630 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.944643 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:08 crc kubenswrapper[5107]: E1209 14:56:08.945117 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:08 crc kubenswrapper[5107]: E1209 14:56:08.945179 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 09 14:56:08 crc kubenswrapper[5107]: I1209 14:56:08.959110 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 09 14:56:09 crc kubenswrapper[5107]: I1209 14:56:09.945467 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:09 crc kubenswrapper[5107]: I1209 14:56:09.946167 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:09 crc kubenswrapper[5107]: I1209 14:56:09.946201 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:09 crc kubenswrapper[5107]: I1209 14:56:09.946211 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:09 crc kubenswrapper[5107]: E1209 14:56:09.946706 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:10 crc kubenswrapper[5107]: E1209 14:56:10.164770 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:56:10 crc kubenswrapper[5107]: I1209 14:56:10.537317 5107 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Dec 09 14:56:10 crc kubenswrapper[5107]: I1209 14:56:10.537509 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Dec 09 14:56:11 crc kubenswrapper[5107]: I1209 14:56:11.854987 5107 trace.go:236] Trace[767853444]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:55:56.870) (total time: 14984ms): Dec 09 14:56:11 crc kubenswrapper[5107]: Trace[767853444]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14984ms (14:56:11.854) Dec 09 14:56:11 crc kubenswrapper[5107]: Trace[767853444]: [14.984828534s] [14.984828534s] END Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.855042 5107 
reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:56:11 crc kubenswrapper[5107]: I1209 14:56:11.855056 5107 trace.go:236] Trace[23795932]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:56:00.824) (total time: 11030ms): Dec 09 14:56:11 crc kubenswrapper[5107]: Trace[23795932]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 11030ms (14:56:11.854) Dec 09 14:56:11 crc kubenswrapper[5107]: Trace[23795932]: [11.030126452s] [11.030126452s] END Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.855110 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:56:11 crc kubenswrapper[5107]: I1209 14:56:11.855196 5107 trace.go:236] Trace[763085284]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:55:57.184) (total time: 14670ms): Dec 09 14:56:11 crc kubenswrapper[5107]: Trace[763085284]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 14670ms (14:56:11.855) Dec 09 14:56:11 crc kubenswrapper[5107]: Trace[763085284]: [14.670393535s] [14.670393535s] END Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.855232 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.855979 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2885950a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.73445505 +0000 UTC m=+0.458159939,LastTimestamp:2025-12-09 14:55:52.73445505 +0000 UTC m=+0.458159939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.856666 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.857155 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d421594 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813917588 +0000 UTC m=+0.537622477,LastTimestamp:2025-12-09 14:55:52.813917588 +0000 UTC m=+0.537622477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.859965 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d4282ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813945578 +0000 UTC m=+0.537650467,LastTimestamp:2025-12-09 14:55:52.813945578 +0000 UTC m=+0.537650467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.861390 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d42acfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813956348 +0000 UTC m=+0.537661237,LastTimestamp:2025-12-09 14:55:52.813956348 +0000 UTC m=+0.537661237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.865884 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc31d33e77 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.890539639 +0000 UTC m=+0.614244528,LastTimestamp:2025-12-09 14:55:52.890539639 +0000 UTC m=+0.614244528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.871077 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d421594\" is forbidden: User \"system:anonymous\" cannot patch 
resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d421594 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813917588 +0000 UTC m=+0.537622477,LastTimestamp:2025-12-09 14:55:52.917984916 +0000 UTC m=+0.641689805,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.876559 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d4282ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d4282ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813945578 +0000 UTC m=+0.537650467,LastTimestamp:2025-12-09 14:55:52.918014087 +0000 UTC m=+0.641718976,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: I1209 14:56:11.879208 5107 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.885847 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d42acfc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d42acfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813956348 +0000 UTC m=+0.537661237,LastTimestamp:2025-12-09 14:55:52.918028787 +0000 UTC m=+0.641733666,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.900110 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d421594\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d421594 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813917588 +0000 UTC m=+0.537622477,LastTimestamp:2025-12-09 14:55:52.919724855 +0000 
UTC m=+0.643429744,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.905660 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d4282ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d4282ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813945578 +0000 UTC m=+0.537650467,LastTimestamp:2025-12-09 14:55:52.919737916 +0000 UTC m=+0.643442795,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.910821 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d42acfc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d42acfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813956348 +0000 UTC m=+0.537661237,LastTimestamp:2025-12-09 14:55:52.919748636 +0000 UTC m=+0.643453525,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.916275 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d421594\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d421594 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813917588 +0000 UTC m=+0.537622477,LastTimestamp:2025-12-09 14:55:52.919953179 +0000 UTC m=+0.643658068,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.925355 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d4282ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d4282ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813945578 +0000 UTC m=+0.537650467,LastTimestamp:2025-12-09 14:55:52.919983089 +0000 UTC m=+0.643687978,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.930388 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d42acfc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d42acfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813956348 +0000 UTC m=+0.537661237,LastTimestamp:2025-12-09 14:55:52.91999713 +0000 UTC m=+0.643702019,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.935311 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d421594\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d421594 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813917588 +0000 UTC m=+0.537622477,LastTimestamp:2025-12-09 14:55:52.920828843 +0000 UTC m=+0.644533732,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.940567 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d4282ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d4282ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813945578 +0000 UTC m=+0.537650467,LastTimestamp:2025-12-09 14:55:52.920850793 +0000 UTC m=+0.644555682,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.946503 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d42acfc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d42acfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813956348 +0000 UTC m=+0.537661237,LastTimestamp:2025-12-09 14:55:52.920860153 +0000 UTC m=+0.644565042,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.951920 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d421594\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d421594 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813917588 +0000 UTC m=+0.537622477,LastTimestamp:2025-12-09 14:55:52.92121087 +0000 UTC m=+0.644915759,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.956874 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d4282ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d4282ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813945578 +0000 UTC m=+0.537650467,LastTimestamp:2025-12-09 14:55:52.92123255 +0000 UTC m=+0.644937439,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.961362 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d42acfc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d42acfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813956348 +0000 UTC m=+0.537661237,LastTimestamp:2025-12-09 14:55:52.921270661 +0000 UTC m=+0.644975550,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.967706 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d421594\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d421594 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813917588 +0000 UTC m=+0.537622477,LastTimestamp:2025-12-09 14:55:52.922250096 +0000 UTC m=+0.645954985,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.972921 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d4282ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d4282ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813945578 +0000 UTC m=+0.537650467,LastTimestamp:2025-12-09 14:55:52.922264296 +0000 UTC m=+0.645969325,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.983106 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d42acfc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d42acfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813956348 +0000 UTC m=+0.537661237,LastTimestamp:2025-12-09 14:55:52.922277896 +0000 UTC m=+0.645982785,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.988944 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f93dc2d421594\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d421594 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813917588 +0000 UTC m=+0.537622477,LastTimestamp:2025-12-09 14:55:52.922310757 +0000 UTC m=+0.646015646,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.994300 5107 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.187f93dc2d4282ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f93dc2d4282ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:52.813945578 +0000 UTC m=+0.537650467,LastTimestamp:2025-12-09 14:55:52.922339717 +0000 UTC m=+0.646044606,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:11 crc kubenswrapper[5107]: E1209 14:56:11.999066 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f93dc49fe8ef6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.296031478 +0000 UTC m=+1.019736367,LastTimestamp:2025-12-09 14:55:53.296031478 +0000 UTC m=+1.019736367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.004222 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dc4a1b8fb1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.297932209 +0000 UTC m=+1.021637098,LastTimestamp:2025-12-09 14:55:53.297932209 +0000 UTC m=+1.021637098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.009550 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dc4aa3bf81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.306857345 +0000 UTC m=+1.030562234,LastTimestamp:2025-12-09 14:55:53.306857345 +0000 UTC m=+1.030562234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.014840 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dc4c4e4221 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.334809121 +0000 UTC m=+1.058514010,LastTimestamp:2025-12-09 14:55:53.334809121 +0000 UTC m=+1.058514010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.021468 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dc4c5356bc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.335142076 +0000 UTC m=+1.058846975,LastTimestamp:2025-12-09 14:55:53.335142076 +0000 UTC m=+1.058846975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.032282 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dc6a84d961 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.841703265 +0000 UTC m=+1.565408154,LastTimestamp:2025-12-09 14:55:53.841703265 +0000 UTC m=+1.565408154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.037245 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f93dc6a870266 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.841844838 +0000 UTC m=+1.565549727,LastTimestamp:2025-12-09 14:55:53.841844838 +0000 UTC m=+1.565549727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.041246 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dc6a894f14 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.84199554 +0000 UTC m=+1.565700429,LastTimestamp:2025-12-09 14:55:53.84199554 +0000 UTC m=+1.565700429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.045769 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dc6aaa4ae4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.844157156 +0000 UTC m=+1.567862045,LastTimestamp:2025-12-09 14:55:53.844157156 +0000 UTC 
m=+1.567862045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.050123 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dc6af56b78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.849080696 +0000 UTC m=+1.572785575,LastTimestamp:2025-12-09 14:55:53.849080696 +0000 UTC m=+1.572785575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.054447 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dc6b3a96ac openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.85361374 +0000 UTC m=+1.577318619,LastTimestamp:2025-12-09 14:55:53.85361374 +0000 UTC m=+1.577318619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.060063 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dc6b484562 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.854510434 +0000 UTC m=+1.578215323,LastTimestamp:2025-12-09 14:55:53.854510434 +0000 UTC m=+1.578215323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.062989 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f93dc6b4a78f3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.854654707 +0000 UTC m=+1.578359586,LastTimestamp:2025-12-09 14:55:53.854654707 +0000 UTC m=+1.578359586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.068252 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dc6b4cbea3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.854803619 +0000 UTC m=+1.578508508,LastTimestamp:2025-12-09 14:55:53.854803619 +0000 UTC m=+1.578508508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.074065 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dc6be96c79 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.865071737 +0000 UTC m=+1.588776626,LastTimestamp:2025-12-09 14:55:53.865071737 +0000 UTC m=+1.588776626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.080793 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dc6c21ffe9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:53.868779497 +0000 UTC m=+1.592484386,LastTimestamp:2025-12-09 14:55:53.868779497 +0000 UTC m=+1.592484386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.086849 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dc7c22e94c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.1372747 +0000 UTC m=+1.860979589,LastTimestamp:2025-12-09 14:55:54.1372747 +0000 UTC m=+1.860979589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.092640 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dc7cfc2bbf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.151513023 +0000 UTC m=+1.875217922,LastTimestamp:2025-12-09 14:55:54.151513023 +0000 UTC m=+1.875217922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.097574 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dc7d119458 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.152916056 +0000 UTC m=+1.876620965,LastTimestamp:2025-12-09 14:55:54.152916056 +0000 UTC m=+1.876620965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.103729 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dc9487c35a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.546537306 +0000 UTC m=+2.270242195,LastTimestamp:2025-12-09 14:55:54.546537306 +0000 UTC m=+2.270242195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.114540 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dc95290357 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.557104983 +0000 UTC m=+2.280809912,LastTimestamp:2025-12-09 14:55:54.557104983 +0000 UTC m=+2.280809912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.132468 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dc9541da07 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.558732807 +0000 UTC m=+2.282437706,LastTimestamp:2025-12-09 14:55:54.558732807 +0000 UTC m=+2.282437706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.142563 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dca0c61f2e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.751950638 +0000 UTC m=+2.475655527,LastTimestamp:2025-12-09 14:55:54.751950638 +0000 UTC m=+2.475655527,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.149902 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93dca186ff45 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.764590917 +0000 UTC m=+2.488295806,LastTimestamp:2025-12-09 14:55:54.764590917 +0000 UTC m=+2.488295806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.155412 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dca6286a36 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already 
present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.842278454 +0000 UTC m=+2.565983353,LastTimestamp:2025-12-09 14:55:54.842278454 +0000 UTC m=+2.565983353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.160805 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dca6eeebe6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.855287782 +0000 UTC m=+2.578992681,LastTimestamp:2025-12-09 14:55:54.855287782 +0000 UTC m=+2.578992681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.166019 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dca71ef8da openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.858436826 +0000 UTC m=+2.582141715,LastTimestamp:2025-12-09 14:55:54.858436826 +0000 UTC m=+2.582141715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.172910 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f93dca73cb6c3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:54.860385987 
+0000 UTC m=+2.584090876,LastTimestamp:2025-12-09 14:55:54.860385987 +0000 UTC m=+2.584090876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.177520 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37828->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.177593 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37828->192.168.126.11:17697: read: connection reset by peer" Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.177740 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37834->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.177763 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37834->192.168.126.11:17697: read: connection reset by peer" Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.177954 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.177982 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.179503 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f93dcb89bac16 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.151821846 +0000 UTC m=+2.875526735,LastTimestamp:2025-12-09 14:55:55.151821846 +0000 UTC 
m=+2.875526735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.186047 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcb89be9ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.151837676 +0000 UTC m=+2.875542555,LastTimestamp:2025-12-09 14:55:55.151837676 +0000 UTC m=+2.875542555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.191768 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dcb89d8ce0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.151944928 +0000 UTC m=+2.875649817,LastTimestamp:2025-12-09 14:55:55.151944928 +0000 UTC m=+2.875649817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.197169 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dcb90a36cb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.159066315 +0000 UTC m=+2.882771204,LastTimestamp:2025-12-09 14:55:55.159066315 +0000 UTC m=+2.882771204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.203794 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dcb9806641 
openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.166811713 +0000 UTC m=+2.890516602,LastTimestamp:2025-12-09 14:55:55.166811713 +0000 UTC m=+2.890516602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.218018 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dcb992fba9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.168029609 +0000 UTC m=+2.891734498,LastTimestamp:2025-12-09 14:55:55.168029609 +0000 UTC m=+2.891734498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.224970 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f93dcba28048d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.177796749 +0000 UTC m=+2.901501638,LastTimestamp:2025-12-09 14:55:55.177796749 +0000 UTC m=+2.901501638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.230069 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dcba3d759c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started 
container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.179201948 +0000 UTC m=+2.902906837,LastTimestamp:2025-12-09 14:55:55.179201948 +0000 UTC m=+2.902906837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.235273 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcba3f1d0e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.17931035 +0000 UTC m=+2.903015269,LastTimestamp:2025-12-09 14:55:55.17931035 +0000 UTC m=+2.903015269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.239704 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcba58d7e0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.180996576 +0000 UTC m=+2.904701475,LastTimestamp:2025-12-09 14:55:55.180996576 +0000 UTC m=+2.904701475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.243906 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcc68009bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.384891836 +0000 UTC m=+3.108596725,LastTimestamp:2025-12-09 14:55:55.384891836 +0000 UTC m=+3.108596725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.249441 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dcc6a1c4b7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.387102391 +0000 UTC m=+3.110807300,LastTimestamp:2025-12-09 14:55:55.387102391 +0000 UTC m=+3.110807300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.254784 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dcc79918aa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.403311274 +0000 UTC m=+3.127016163,LastTimestamp:2025-12-09 14:55:55.403311274 +0000 UTC m=+3.127016163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.261414 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcc7ac6686 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.40457639 +0000 UTC m=+3.128281279,LastTimestamp:2025-12-09 14:55:55.40457639 +0000 UTC m=+3.128281279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.268128 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dcc7adfaca openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.404679882 +0000 UTC m=+3.128384761,LastTimestamp:2025-12-09 14:55:55.404679882 +0000 UTC m=+3.128384761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.272567 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcc7bc7802 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.405629442 +0000 UTC m=+3.129334331,LastTimestamp:2025-12-09 14:55:55.405629442 +0000 UTC m=+3.129334331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.277004 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcd424fb4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.613805391 +0000 UTC m=+3.337510270,LastTimestamp:2025-12-09 14:55:55.613805391 +0000 UTC m=+3.337510270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.282273 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dcd5297589 openshift-kube-scheduler 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.630876041 +0000 UTC m=+3.354580930,LastTimestamp:2025-12-09 14:55:55.630876041 +0000 UTC m=+3.354580930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.287227 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcd5a4458f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.638924687 +0000 UTC m=+3.362629576,LastTimestamp:2025-12-09 14:55:55.638924687 +0000 UTC m=+3.362629576,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.293631 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcd5c068d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.640768725 +0000 UTC m=+3.364473614,LastTimestamp:2025-12-09 14:55:55.640768725 +0000 UTC m=+3.364473614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.297929 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f93dcd60df059 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.645849689 +0000 UTC m=+3.369554578,LastTimestamp:2025-12-09 14:55:55.645849689 +0000 UTC m=+3.369554578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.302811 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dce34ef529 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.868214569 +0000 UTC m=+3.591919458,LastTimestamp:2025-12-09 14:55:55.868214569 +0000 UTC m=+3.591919458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.310039 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dce36437bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.869607867 +0000 UTC m=+3.593312756,LastTimestamp:2025-12-09 14:55:55.869607867 +0000 UTC m=+3.593312756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.314620 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dce4ece08d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container 
kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.895341197 +0000 UTC m=+3.619046096,LastTimestamp:2025-12-09 14:55:55.895341197 +0000 UTC m=+3.619046096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.322989 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dce5127632 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.897804338 +0000 UTC m=+3.621509227,LastTimestamp:2025-12-09 14:55:55.897804338 +0000 UTC m=+3.621509227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.327853 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcf1a42795 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:56.108679061 +0000 UTC m=+3.832383950,LastTimestamp:2025-12-09 14:55:56.108679061 +0000 UTC m=+3.832383950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.332086 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dcf1a6ec09 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:56.108860425 +0000 UTC m=+3.832565304,LastTimestamp:2025-12-09 14:55:56.108860425 +0000 UTC m=+3.832565304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.336885 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcf242d6d7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:56.119078615 +0000 UTC m=+3.842783504,LastTimestamp:2025-12-09 14:55:56.119078615 +0000 UTC m=+3.842783504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.342140 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dcf29092bd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:56.124172989 +0000 UTC m=+3.847877878,LastTimestamp:2025-12-09 14:55:56.124172989 +0000 UTC m=+3.847877878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.349401 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd203082f9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:56.889629433 +0000 UTC m=+4.613334322,LastTimestamp:2025-12-09 14:55:56.889629433 +0000 UTC m=+4.613334322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.355527 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd2c22cab7 openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.090056887 +0000 UTC m=+4.813761776,LastTimestamp:2025-12-09 14:55:57.090056887 +0000 UTC m=+4.813761776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.360766 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd2cdca022 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.102235682 +0000 UTC m=+4.825940571,LastTimestamp:2025-12-09 14:55:57.102235682 +0000 UTC m=+4.825940571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.364533 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd2cedaba9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.103352745 +0000 UTC m=+4.827057634,LastTimestamp:2025-12-09 14:55:57.103352745 +0000 UTC m=+4.827057634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.369105 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd38d5deff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.303119615 +0000 UTC m=+5.026824544,LastTimestamp:2025-12-09 14:55:57.303119615 +0000 UTC m=+5.026824544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.374775 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd398f7900 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.3152832 +0000 UTC m=+5.038988089,LastTimestamp:2025-12-09 14:55:57.3152832 +0000 UTC m=+5.038988089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.382125 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd39a1b8db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.316479195 +0000 UTC m=+5.040184084,LastTimestamp:2025-12-09 14:55:57.316479195 +0000 UTC m=+5.040184084,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.389748 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd476d49b9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.547923897 +0000 UTC m=+5.271628786,LastTimestamp:2025-12-09 14:55:57.547923897 +0000 UTC m=+5.271628786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.407409 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd482f4f5b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.560639323 +0000 UTC m=+5.284344212,LastTimestamp:2025-12-09 14:55:57.560639323 +0000 UTC m=+5.284344212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.414531 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd4842c47a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.56191449 +0000 UTC m=+5.285619379,LastTimestamp:2025-12-09 14:55:57.56191449 +0000 UTC m=+5.285619379,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.420271 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd54cacd9f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.772156319 +0000 UTC m=+5.495861208,LastTimestamp:2025-12-09 14:55:57.772156319 +0000 UTC m=+5.495861208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.427047 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd55ab8334 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.786882868 +0000 UTC m=+5.510587757,LastTimestamp:2025-12-09 14:55:57.786882868 +0000 UTC m=+5.510587757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.431755 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd55c2e7e2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.78841597 +0000 UTC m=+5.512120859,LastTimestamp:2025-12-09 14:55:57.78841597 +0000 UTC m=+5.512120859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.437181 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd622e4742 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:57.99677933 +0000 UTC m=+5.720484219,LastTimestamp:2025-12-09 14:55:57.99677933 +0000 UTC m=+5.720484219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.442346 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f93dd630ec4db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:58.011491547 +0000 UTC m=+5.735196436,LastTimestamp:2025-12-09 14:55:58.011491547 +0000 UTC m=+5.735196436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.449032 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 09 14:56:12 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-controller-manager-crc.187f93ddf987daa7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 09 14:56:12 crc kubenswrapper[5107]: body: Dec 09 14:56:12 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:00.536009383 +0000 UTC m=+8.259714262,LastTimestamp:2025-12-09 14:56:00.536009383 +0000 UTC m=+8.259714262,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:56:12 crc kubenswrapper[5107]: > Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.454142 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93ddf98a8b1a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:00.536185626 +0000 UTC m=+8.259890535,LastTimestamp:2025-12-09 14:56:00.536185626 +0000 UTC m=+8.259890535,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.460382 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:56:12 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.187f93df727bc59b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 09 14:56:12 crc kubenswrapper[5107]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:56:12 crc kubenswrapper[5107]: Dec 09 14:56:12 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:06.860227995 +0000 UTC m=+14.583932884,LastTimestamp:2025-12-09 14:56:06.860227995 +0000 UTC m=+14.583932884,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:56:12 crc kubenswrapper[5107]: > Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.465382 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93df727cc136 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:06.860292406 +0000 UTC m=+14.583997295,LastTimestamp:2025-12-09 14:56:06.860292406 +0000 UTC m=+14.583997295,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.472473 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93df727bc59b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:56:12 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.187f93df727bc59b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 09 14:56:12 crc kubenswrapper[5107]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:56:12 crc kubenswrapper[5107]: Dec 09 14:56:12 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:06.860227995 +0000 UTC m=+14.583932884,LastTimestamp:2025-12-09 14:56:06.865892174 +0000 UTC m=+14.589597063,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:56:12 crc kubenswrapper[5107]: > Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.478039 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93df727cc136\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93df727cc136 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:06.860292406 +0000 UTC m=+14.583997295,LastTimestamp:2025-12-09 14:56:06.865956775 +0000 UTC m=+14.589661664,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.483720 5107 event.go:359] 
"Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.187f93ddf987daa7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 09 14:56:12 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-controller-manager-crc.187f93ddf987daa7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 09 14:56:12 crc kubenswrapper[5107]: body: Dec 09 14:56:12 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:00.536009383 +0000 UTC m=+8.259714262,LastTimestamp:2025-12-09 14:56:10.53746325 +0000 UTC m=+18.261168139,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:56:12 crc kubenswrapper[5107]: > Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.487801 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.187f93ddf98a8b1a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f93ddf98a8b1a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:00.536185626 +0000 UTC m=+8.259890535,LastTimestamp:2025-12-09 14:56:10.537545142 +0000 UTC m=+18.261250031,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.492801 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:56:12 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.187f93e0af6be96c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:37828->192.168.126.11:17697: read: connection reset by peer Dec 09 14:56:12 crc kubenswrapper[5107]: body: Dec 09 14:56:12 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:12.17756606 +0000 UTC m=+19.901270949,LastTimestamp:2025-12-09 14:56:12.17756606 
+0000 UTC m=+19.901270949,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:56:12 crc kubenswrapper[5107]: > Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.498169 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93e0af6cae11 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37828->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:12.177616401 +0000 UTC m=+19.901321290,LastTimestamp:2025-12-09 14:56:12.177616401 +0000 UTC m=+19.901321290,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.503506 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:56:12 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.187f93e0af6ec5b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:37834->192.168.126.11:17697: read: connection reset by peer Dec 09 14:56:12 crc kubenswrapper[5107]: body: Dec 09 14:56:12 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:12.177753524 +0000 UTC m=+19.901458413,LastTimestamp:2025-12-09 14:56:12.177753524 +0000 UTC m=+19.901458413,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:56:12 crc kubenswrapper[5107]: > Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.507880 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93e0af6f1a64 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37834->192.168.126.11:17697: read: connection reset by 
peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:12.177775204 +0000 UTC m=+19.901480093,LastTimestamp:2025-12-09 14:56:12.177775204 +0000 UTC m=+19.901480093,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.512692 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:56:12 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.187f93e0af721fc4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 09 14:56:12 crc kubenswrapper[5107]: body: Dec 09 14:56:12 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:12.177973188 +0000 UTC m=+19.901678077,LastTimestamp:2025-12-09 14:56:12.177973188 +0000 UTC m=+19.901678077,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:56:12 crc kubenswrapper[5107]: > Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.517405 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93e0af72733f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:12.177994559 +0000 UTC m=+19.901699448,LastTimestamp:2025-12-09 14:56:12.177994559 +0000 UTC m=+19.901699448,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.732379 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.893783 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.955104 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 09 14:56:12 crc kubenswrapper[5107]: 
I1209 14:56:12.957548 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="407c367cde40fc33fe02dd83c4e64894c2ed320f862d49d4687db4ceb0a006de" exitCode=255 Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.957670 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"407c367cde40fc33fe02dd83c4e64894c2ed320f862d49d4687db4ceb0a006de"} Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.958078 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.959002 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.959043 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.959054 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.959472 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:12 crc kubenswrapper[5107]: I1209 14:56:12.959885 5107 scope.go:117] "RemoveContainer" containerID="407c367cde40fc33fe02dd83c4e64894c2ed320f862d49d4687db4ceb0a006de" Dec 09 14:56:12 crc kubenswrapper[5107]: E1209 14:56:12.971382 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93dce5127632\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dce5127632 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.897804338 +0000 UTC m=+3.621509227,LastTimestamp:2025-12-09 14:56:12.961235188 +0000 UTC m=+20.684940077,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:13 crc kubenswrapper[5107]: E1209 14:56:13.302743 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93dcf1a42795\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcf1a42795 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:56.108679061 +0000 UTC m=+3.832383950,LastTimestamp:2025-12-09 14:56:13.297549465 +0000 UTC m=+21.021254354,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:13 crc kubenswrapper[5107]: E1209 14:56:13.335869 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93dcf242d6d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcf242d6d7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:56.119078615 +0000 UTC m=+3.842783504,LastTimestamp:2025-12-09 14:56:13.327524272 +0000 UTC m=+21.051229161,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:13 crc kubenswrapper[5107]: I1209 14:56:13.729740 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:13 crc kubenswrapper[5107]: I1209 14:56:13.962776 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 09 14:56:13 crc kubenswrapper[5107]: I1209 14:56:13.964997 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"acebf8fa3be61ebcde96b47a591e2729a9f0cffbb0694c5eab92998a8fc75d13"} Dec 09 14:56:13 crc kubenswrapper[5107]: I1209 14:56:13.965270 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:13 crc kubenswrapper[5107]: I1209 14:56:13.966017 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:13 crc kubenswrapper[5107]: I1209 14:56:13.966073 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:13 crc kubenswrapper[5107]: I1209 14:56:13.966086 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:13 crc kubenswrapper[5107]: E1209 14:56:13.966502 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 14:56:14.728528 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 
14:56:14.970546 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 14:56:14.971153 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 14:56:14.973117 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="acebf8fa3be61ebcde96b47a591e2729a9f0cffbb0694c5eab92998a8fc75d13" exitCode=255 Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 14:56:14.973212 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"acebf8fa3be61ebcde96b47a591e2729a9f0cffbb0694c5eab92998a8fc75d13"} Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 14:56:14.973306 5107 scope.go:117] "RemoveContainer" containerID="407c367cde40fc33fe02dd83c4e64894c2ed320f862d49d4687db4ceb0a006de" Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 14:56:14.973599 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 14:56:14.974472 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 14:56:14.974514 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 14:56:14.974524 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:14 crc kubenswrapper[5107]: E1209 14:56:14.974966 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:14 crc kubenswrapper[5107]: I1209 14:56:14.975286 5107 scope.go:117] "RemoveContainer" containerID="acebf8fa3be61ebcde96b47a591e2729a9f0cffbb0694c5eab92998a8fc75d13" Dec 09 14:56:14 crc kubenswrapper[5107]: E1209 14:56:14.975538 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:56:14 crc kubenswrapper[5107]: E1209 14:56:14.981190 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93e15630f928 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod 
kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:14.975498536 +0000 UTC m=+22.699203425,LastTimestamp:2025-12-09 14:56:14.975498536 +0000 UTC m=+22.699203425,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:15 crc kubenswrapper[5107]: E1209 14:56:15.350464 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:56:15 crc kubenswrapper[5107]: E1209 14:56:15.386904 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:56:15 crc kubenswrapper[5107]: I1209 14:56:15.728888 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:15 crc kubenswrapper[5107]: I1209 14:56:15.977433 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 09 14:56:16 crc kubenswrapper[5107]: E1209 14:56:16.602169 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:56:16 crc kubenswrapper[5107]: I1209 14:56:16.728832 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.541999 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.542277 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.543409 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.543510 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.543526 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:17 crc kubenswrapper[5107]: E1209 14:56:17.544004 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.547073 5107 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.730256 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.985222 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.985507 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.986187 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.986389 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.986424 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.986436 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:17 crc kubenswrapper[5107]: E1209 14:56:17.986789 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.987102 5107 scope.go:117] "RemoveContainer" containerID="acebf8fa3be61ebcde96b47a591e2729a9f0cffbb0694c5eab92998a8fc75d13" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.987207 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.987269 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:17 crc kubenswrapper[5107]: I1209 14:56:17.987280 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:17 crc kubenswrapper[5107]: E1209 14:56:17.987300 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:56:17 crc kubenswrapper[5107]: E1209 14:56:17.987725 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:17 crc kubenswrapper[5107]: E1209 14:56:17.992785 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93e15630f928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93e15630f928 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:14.975498536 +0000 UTC m=+22.699203425,LastTimestamp:2025-12-09 14:56:17.987276457 +0000 UTC m=+25.710981346,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:18 crc kubenswrapper[5107]: I1209 14:56:18.256993 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:18 crc kubenswrapper[5107]: I1209 14:56:18.258110 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:18 crc kubenswrapper[5107]: I1209 14:56:18.258166 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:18 crc kubenswrapper[5107]: I1209 14:56:18.258184 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:18 crc kubenswrapper[5107]: I1209 14:56:18.258215 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:56:18 crc kubenswrapper[5107]: E1209 14:56:18.268379 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:56:18 crc kubenswrapper[5107]: E1209 14:56:18.546239 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:56:18 crc kubenswrapper[5107]: I1209 14:56:18.727538 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:19 crc kubenswrapper[5107]: I1209 14:56:19.728638 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:20 crc kubenswrapper[5107]: I1209 14:56:20.731246 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:21 crc kubenswrapper[5107]: I1209 14:56:21.730100 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:22 crc kubenswrapper[5107]: E1209 14:56:22.358236 5107 
controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:56:22 crc kubenswrapper[5107]: I1209 14:56:22.729053 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:22 crc kubenswrapper[5107]: E1209 14:56:22.894377 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:56:22 crc kubenswrapper[5107]: E1209 14:56:22.894374 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:56:23 crc kubenswrapper[5107]: I1209 14:56:23.729730 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:23 crc kubenswrapper[5107]: I1209 14:56:23.966259 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:56:23 crc kubenswrapper[5107]: I1209 14:56:23.966635 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:23 crc kubenswrapper[5107]: I1209 14:56:23.967580 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:23 crc kubenswrapper[5107]: I1209 14:56:23.967619 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:23 crc kubenswrapper[5107]: I1209 14:56:23.967632 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:23 crc kubenswrapper[5107]: E1209 14:56:23.968010 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:23 crc kubenswrapper[5107]: I1209 14:56:23.968403 5107 scope.go:117] "RemoveContainer" containerID="acebf8fa3be61ebcde96b47a591e2729a9f0cffbb0694c5eab92998a8fc75d13" Dec 09 14:56:23 crc kubenswrapper[5107]: E1209 14:56:23.968652 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:56:23 crc kubenswrapper[5107]: E1209 14:56:23.974152 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93e15630f928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93e15630f928 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:14.975498536 +0000 UTC m=+22.699203425,LastTimestamp:2025-12-09 14:56:23.968621315 +0000 UTC m=+31.692326204,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:24 crc kubenswrapper[5107]: E1209 14:56:24.679357 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:56:24 crc kubenswrapper[5107]: I1209 14:56:24.729544 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:24 crc kubenswrapper[5107]: E1209 14:56:24.793689 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:56:25 crc kubenswrapper[5107]: I1209 14:56:25.268553 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:25 crc kubenswrapper[5107]: I1209 14:56:25.269856 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:25 crc kubenswrapper[5107]: I1209 14:56:25.269920 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:25 crc kubenswrapper[5107]: I1209 14:56:25.269934 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:25 crc kubenswrapper[5107]: I1209 14:56:25.269965 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:56:25 crc kubenswrapper[5107]: E1209 14:56:25.280967 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:56:25 crc kubenswrapper[5107]: I1209 14:56:25.728926 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:26 crc kubenswrapper[5107]: I1209 14:56:26.729298 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at 
the cluster scope Dec 09 14:56:27 crc kubenswrapper[5107]: I1209 14:56:27.724301 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:28 crc kubenswrapper[5107]: I1209 14:56:28.729425 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:29 crc kubenswrapper[5107]: E1209 14:56:29.365024 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:56:29 crc kubenswrapper[5107]: I1209 14:56:29.731245 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:30 crc kubenswrapper[5107]: I1209 14:56:30.728728 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:31 crc kubenswrapper[5107]: I1209 14:56:31.728461 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:32 crc kubenswrapper[5107]: I1209 14:56:32.281684 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:32 crc kubenswrapper[5107]: I1209 14:56:32.282988 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:32 crc kubenswrapper[5107]: I1209 14:56:32.283039 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:32 crc kubenswrapper[5107]: I1209 14:56:32.283050 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:32 crc kubenswrapper[5107]: I1209 14:56:32.283092 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:56:32 crc kubenswrapper[5107]: E1209 14:56:32.297841 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:56:32 crc kubenswrapper[5107]: I1209 14:56:32.729817 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:32 crc kubenswrapper[5107]: E1209 14:56:32.895480 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:56:33 crc kubenswrapper[5107]: I1209 14:56:33.729513 5107 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:34 crc kubenswrapper[5107]: I1209 14:56:34.730090 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:35 crc kubenswrapper[5107]: I1209 14:56:35.730313 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:35 crc kubenswrapper[5107]: I1209 14:56:35.817610 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:35 crc kubenswrapper[5107]: I1209 14:56:35.819113 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:35 crc kubenswrapper[5107]: I1209 14:56:35.819194 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:35 crc kubenswrapper[5107]: I1209 14:56:35.819208 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:35 crc kubenswrapper[5107]: E1209 14:56:35.819736 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:35 crc kubenswrapper[5107]: I1209 14:56:35.820026 5107 scope.go:117] "RemoveContainer" containerID="acebf8fa3be61ebcde96b47a591e2729a9f0cffbb0694c5eab92998a8fc75d13" Dec 09 14:56:35 crc kubenswrapper[5107]: E1209 14:56:35.830073 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93dce5127632\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dce5127632 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:55.897804338 +0000 UTC m=+3.621509227,LastTimestamp:2025-12-09 14:56:35.822487635 +0000 UTC m=+43.546192524,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:36 crc kubenswrapper[5107]: E1209 14:56:36.061848 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93dcf1a42795\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcf1a42795 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:56.108679061 +0000 UTC m=+3.832383950,LastTimestamp:2025-12-09 14:56:36.056057232 +0000 UTC m=+43.779762121,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:36 crc kubenswrapper[5107]: E1209 14:56:36.071432 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93dcf242d6d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93dcf242d6d7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:55:56.119078615 +0000 UTC m=+3.842783504,LastTimestamp:2025-12-09 14:56:36.065673007 +0000 UTC m=+43.789377896,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:36 crc kubenswrapper[5107]: E1209 14:56:36.376656 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:56:36 crc kubenswrapper[5107]: I1209 14:56:36.729126 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:37 crc kubenswrapper[5107]: I1209 14:56:37.037085 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 09 14:56:37 crc kubenswrapper[5107]: I1209 14:56:37.039387 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8d0900d52b3a2771989e26ae6a875866278ffebc101b6e3ce9eb68c2a5dee35b"} Dec 09 14:56:37 crc kubenswrapper[5107]: I1209 14:56:37.039650 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:37 crc kubenswrapper[5107]: I1209 14:56:37.040378 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:37 crc kubenswrapper[5107]: I1209 14:56:37.040468 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:37 crc kubenswrapper[5107]: I1209 14:56:37.040500 5107 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:37 crc kubenswrapper[5107]: E1209 14:56:37.041234 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:37 crc kubenswrapper[5107]: I1209 14:56:37.729281 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.043880 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.044672 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.046705 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8d0900d52b3a2771989e26ae6a875866278ffebc101b6e3ce9eb68c2a5dee35b" exitCode=255 Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.046774 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8d0900d52b3a2771989e26ae6a875866278ffebc101b6e3ce9eb68c2a5dee35b"} Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.046852 5107 scope.go:117] "RemoveContainer" containerID="acebf8fa3be61ebcde96b47a591e2729a9f0cffbb0694c5eab92998a8fc75d13" Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.047116 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.047841 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.047884 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.047899 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:38 crc kubenswrapper[5107]: E1209 14:56:38.048275 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.048584 5107 scope.go:117] "RemoveContainer" containerID="8d0900d52b3a2771989e26ae6a875866278ffebc101b6e3ce9eb68c2a5dee35b" Dec 09 14:56:38 crc kubenswrapper[5107]: E1209 14:56:38.048774 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:56:38 crc kubenswrapper[5107]: E1209 14:56:38.054065 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93e15630f928\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93e15630f928 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:14.975498536 +0000 UTC m=+22.699203425,LastTimestamp:2025-12-09 14:56:38.048739873 +0000 UTC m=+45.772444762,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:38 crc kubenswrapper[5107]: I1209 14:56:38.728642 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:39 crc kubenswrapper[5107]: I1209 14:56:39.052009 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 09 14:56:39 crc kubenswrapper[5107]: I1209 14:56:39.299054 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:39 crc kubenswrapper[5107]: I1209 14:56:39.300286 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:39 crc kubenswrapper[5107]: I1209 14:56:39.300391 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:39 crc kubenswrapper[5107]: I1209 14:56:39.300427 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:39 crc kubenswrapper[5107]: I1209 14:56:39.300470 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:56:39 crc kubenswrapper[5107]: E1209 14:56:39.312772 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:56:39 crc kubenswrapper[5107]: I1209 14:56:39.727363 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:39 crc kubenswrapper[5107]: E1209 14:56:39.796613 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:56:40 crc kubenswrapper[5107]: I1209 14:56:40.732613 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" 
in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:41 crc kubenswrapper[5107]: I1209 14:56:41.728939 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:42 crc kubenswrapper[5107]: I1209 14:56:42.730081 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:42 crc kubenswrapper[5107]: E1209 14:56:42.896756 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:56:43 crc kubenswrapper[5107]: E1209 14:56:43.006175 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:56:43 crc kubenswrapper[5107]: E1209 14:56:43.381992 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:56:43 crc kubenswrapper[5107]: E1209 14:56:43.501513 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:56:43 crc kubenswrapper[5107]: I1209 14:56:43.730219 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:44 crc kubenswrapper[5107]: I1209 14:56:44.728521 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:45 crc kubenswrapper[5107]: I1209 14:56:45.732307 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:46 crc kubenswrapper[5107]: I1209 14:56:46.313895 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:46 crc kubenswrapper[5107]: I1209 14:56:46.315680 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:46 crc kubenswrapper[5107]: I1209 14:56:46.315766 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:46 crc kubenswrapper[5107]: I1209 14:56:46.315797 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:46 crc 
kubenswrapper[5107]: I1209 14:56:46.315847 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:56:46 crc kubenswrapper[5107]: E1209 14:56:46.333828 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:56:46 crc kubenswrapper[5107]: I1209 14:56:46.729046 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.040729 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.041928 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.043178 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.043250 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.043276 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:47 crc kubenswrapper[5107]: E1209 14:56:47.044069 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.044558 5107 scope.go:117] "RemoveContainer" containerID="8d0900d52b3a2771989e26ae6a875866278ffebc101b6e3ce9eb68c2a5dee35b" Dec 09 14:56:47 crc kubenswrapper[5107]: E1209 14:56:47.044896 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:56:47 crc kubenswrapper[5107]: E1209 14:56:47.053215 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93e15630f928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f93e15630f928 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:14.975498536 +0000 UTC m=+22.699203425,LastTimestamp:2025-12-09 14:56:47.044835745 +0000 UTC m=+54.768540664,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.730688 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.901209 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.901470 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.902755 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.902855 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.902884 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:47 crc kubenswrapper[5107]: E1209 14:56:47.903662 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.986161 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.986803 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.988188 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.988276 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.988352 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:47 crc kubenswrapper[5107]: E1209 14:56:47.988846 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:56:47 crc kubenswrapper[5107]: I1209 14:56:47.989199 5107 scope.go:117] "RemoveContainer" containerID="8d0900d52b3a2771989e26ae6a875866278ffebc101b6e3ce9eb68c2a5dee35b" Dec 09 14:56:47 crc kubenswrapper[5107]: E1209 14:56:47.989536 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:56:47 crc kubenswrapper[5107]: E1209 14:56:47.996809 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f93e15630f928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.187f93e15630f928 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:56:14.975498536 +0000 UTC m=+22.699203425,LastTimestamp:2025-12-09 14:56:47.98948945 +0000 UTC m=+55.713194339,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:56:48 crc kubenswrapper[5107]: I1209 14:56:48.729571 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:48 crc kubenswrapper[5107]: E1209 14:56:48.858882 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:56:49 crc kubenswrapper[5107]: I1209 14:56:49.729092 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:50 crc kubenswrapper[5107]: E1209 14:56:50.387732 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:56:50 crc kubenswrapper[5107]: I1209 14:56:50.729802 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:51 crc kubenswrapper[5107]: I1209 14:56:51.729566 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:52 crc kubenswrapper[5107]: I1209 14:56:52.728683 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:52 crc kubenswrapper[5107]: E1209 14:56:52.897456 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:56:53 crc kubenswrapper[5107]: I1209 14:56:53.334989 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:56:53 crc kubenswrapper[5107]: I1209 14:56:53.336548 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:56:53 crc kubenswrapper[5107]: I1209 14:56:53.336612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:56:53 crc kubenswrapper[5107]: I1209 14:56:53.336625 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:56:53 crc kubenswrapper[5107]: I1209 14:56:53.336655 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:56:53 crc kubenswrapper[5107]: E1209 14:56:53.352848 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:56:53 crc kubenswrapper[5107]: I1209 14:56:53.728669 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:54 crc kubenswrapper[5107]: I1209 14:56:54.729867 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:55 crc kubenswrapper[5107]: I1209 14:56:55.728049 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:56 crc kubenswrapper[5107]: I1209 14:56:56.729434 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:57 crc kubenswrapper[5107]: E1209 14:56:57.393794 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:56:57 crc kubenswrapper[5107]: I1209 14:56:57.728812 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:56:58 crc kubenswrapper[5107]: I1209 14:56:58.320172 5107 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-xvvmh" Dec 09 14:56:58 crc kubenswrapper[5107]: I1209 14:56:58.327037 5107 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-xvvmh" Dec 09 14:56:58 crc kubenswrapper[5107]: I1209 14:56:58.430217 5107 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 09 14:56:58 crc kubenswrapper[5107]: I1209 14:56:58.621927 5107 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 09 14:56:59 crc kubenswrapper[5107]: I1209 14:56:59.328487 5107 certificate_manager.go:715] 
"Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-08 14:51:58 +0000 UTC" deadline="2025-12-30 16:23:26.360365657 +0000 UTC" Dec 09 14:56:59 crc kubenswrapper[5107]: I1209 14:56:59.328547 5107 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="505h26m27.031822676s" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.353854 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.355001 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.355163 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.355301 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.355580 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.365062 5107 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.365546 5107 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.365631 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.369805 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.369950 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.370102 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.370229 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.370371 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:00Z","lastTransitionTime":"2025-12-09T14:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.387944 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.395963 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.396009 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.396022 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.396044 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.396056 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:00Z","lastTransitionTime":"2025-12-09T14:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.406838 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.414596 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.414647 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.414658 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.414674 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.414687 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:00Z","lastTransitionTime":"2025-12-09T14:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.424185 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.432106 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.432145 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.432160 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.432178 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:00 crc kubenswrapper[5107]: I1209 14:57:00.432192 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:00Z","lastTransitionTime":"2025-12-09T14:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.443726 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.443923 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.443956 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.544618 5107 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.645695 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.746172 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.846729 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:00 crc kubenswrapper[5107]: E1209 14:57:00.946966 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.047865 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.148799 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.249551 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.350516 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.451477 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.552096 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.653183 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.754118 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:01 crc kubenswrapper[5107]: I1209 14:57:01.817585 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:57:01 crc kubenswrapper[5107]: I1209 14:57:01.818808 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:01 crc kubenswrapper[5107]: I1209 14:57:01.818894 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:01 crc kubenswrapper[5107]: I1209 14:57:01.818913 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.819639 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:57:01 crc kubenswrapper[5107]: I1209 14:57:01.819918 5107 scope.go:117] "RemoveContainer" containerID="8d0900d52b3a2771989e26ae6a875866278ffebc101b6e3ce9eb68c2a5dee35b" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.855686 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:01 crc kubenswrapper[5107]: E1209 14:57:01.956380 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 
14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.057493 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:02 crc kubenswrapper[5107]: I1209 14:57:02.120482 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 09 14:57:02 crc kubenswrapper[5107]: I1209 14:57:02.122913 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e"} Dec 09 14:57:02 crc kubenswrapper[5107]: I1209 14:57:02.123253 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:57:02 crc kubenswrapper[5107]: I1209 14:57:02.123875 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:02 crc kubenswrapper[5107]: I1209 14:57:02.123920 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:02 crc kubenswrapper[5107]: I1209 14:57:02.123932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.124533 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.158313 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.259218 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.360078 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.460826 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.561311 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.662072 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.762412 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.863177 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.898631 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:57:02 crc kubenswrapper[5107]: E1209 14:57:02.963328 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.063935 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:03 crc kubenswrapper[5107]: I1209 
14:57:03.127086 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 09 14:57:03 crc kubenswrapper[5107]: I1209 14:57:03.127688 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 09 14:57:03 crc kubenswrapper[5107]: I1209 14:57:03.129555 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e" exitCode=255 Dec 09 14:57:03 crc kubenswrapper[5107]: I1209 14:57:03.129590 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e"} Dec 09 14:57:03 crc kubenswrapper[5107]: I1209 14:57:03.129667 5107 scope.go:117] "RemoveContainer" containerID="8d0900d52b3a2771989e26ae6a875866278ffebc101b6e3ce9eb68c2a5dee35b" Dec 09 14:57:03 crc kubenswrapper[5107]: I1209 14:57:03.129876 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:57:03 crc kubenswrapper[5107]: I1209 14:57:03.130472 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:03 crc kubenswrapper[5107]: I1209 14:57:03.130514 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:03 crc kubenswrapper[5107]: I1209 14:57:03.130528 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.131007 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:57:03 crc kubenswrapper[5107]: I1209 14:57:03.131357 5107 scope.go:117] "RemoveContainer" containerID="dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.131619 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.164066 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.264932 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.365173 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.466078 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.567236 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" 
not found" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.668326 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.768774 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.869274 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:03 crc kubenswrapper[5107]: E1209 14:57:03.970505 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:04 crc kubenswrapper[5107]: E1209 14:57:04.071502 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:04 crc kubenswrapper[5107]: I1209 14:57:04.133149 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 09 14:57:04 crc kubenswrapper[5107]: E1209 14:57:04.172724 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:04 crc kubenswrapper[5107]: E1209 14:57:04.273792 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:04 crc kubenswrapper[5107]: E1209 14:57:04.393674 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:04 crc kubenswrapper[5107]: E1209 14:57:04.494752 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:04 crc kubenswrapper[5107]: E1209 14:57:04.595286 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:04 crc kubenswrapper[5107]: E1209 14:57:04.696348 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:04 crc kubenswrapper[5107]: E1209 14:57:04.797410 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:04 crc kubenswrapper[5107]: E1209 14:57:04.898542 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:04 crc kubenswrapper[5107]: E1209 14:57:04.999645 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:05 crc kubenswrapper[5107]: E1209 14:57:05.765499 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:05 crc kubenswrapper[5107]: E1209 14:57:05.865881 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:05 crc kubenswrapper[5107]: E1209 14:57:05.966695 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:06 crc kubenswrapper[5107]: E1209 14:57:06.067722 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:06 crc kubenswrapper[5107]: E1209 14:57:06.168313 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:06 crc kubenswrapper[5107]: E1209 14:57:06.269201 5107 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:06 crc kubenswrapper[5107]: E1209 14:57:06.370320 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:06 crc kubenswrapper[5107]: E1209 14:57:06.471176 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:06 crc kubenswrapper[5107]: E1209 14:57:06.572160 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:06 crc kubenswrapper[5107]: E1209 14:57:06.673385 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:06 crc kubenswrapper[5107]: E1209 14:57:06.773979 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:06 crc kubenswrapper[5107]: E1209 14:57:06.874744 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:06 crc kubenswrapper[5107]: E1209 14:57:06.975677 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.076840 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.177149 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.278451 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.379538 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.480441 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.580911 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.682057 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.783199 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.884029 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.984235 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:07 crc kubenswrapper[5107]: I1209 14:57:07.985482 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:57:07 crc kubenswrapper[5107]: I1209 14:57:07.985817 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:57:07 crc kubenswrapper[5107]: I1209 14:57:07.986890 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:07 crc 
kubenswrapper[5107]: I1209 14:57:07.986951 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:07 crc kubenswrapper[5107]: I1209 14:57:07.986974 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.987727 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:57:07 crc kubenswrapper[5107]: I1209 14:57:07.988085 5107 scope.go:117] "RemoveContainer" containerID="dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e" Dec 09 14:57:07 crc kubenswrapper[5107]: E1209 14:57:07.988389 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:57:08 crc kubenswrapper[5107]: E1209 14:57:08.084892 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:08 crc kubenswrapper[5107]: E1209 14:57:08.186109 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:08 crc kubenswrapper[5107]: E1209 14:57:08.286753 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:08 crc kubenswrapper[5107]: E1209 14:57:08.387631 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:08 crc kubenswrapper[5107]: E1209 14:57:08.488616 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:08 crc kubenswrapper[5107]: E1209 14:57:08.589454 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:08 crc kubenswrapper[5107]: E1209 14:57:08.690537 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:08 crc kubenswrapper[5107]: E1209 14:57:08.790869 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:08 crc kubenswrapper[5107]: E1209 14:57:08.891379 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:08 crc kubenswrapper[5107]: E1209 14:57:08.991743 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:09 crc kubenswrapper[5107]: E1209 14:57:09.092663 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:09 crc kubenswrapper[5107]: E1209 14:57:09.193005 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:09 crc kubenswrapper[5107]: E1209 14:57:09.293948 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:09 crc kubenswrapper[5107]: E1209 14:57:09.394320 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" 
Dec 09 14:57:09 crc kubenswrapper[5107]: E1209 14:57:09.495069 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:09 crc kubenswrapper[5107]: E1209 14:57:09.596171 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:09 crc kubenswrapper[5107]: E1209 14:57:09.697411 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:09 crc kubenswrapper[5107]: E1209 14:57:09.798154 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:09 crc kubenswrapper[5107]: E1209 14:57:09.898732 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:09 crc kubenswrapper[5107]: E1209 14:57:09.999420 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.099611 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.200430 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.301480 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.402193 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.502955 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.604114 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.704567 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.805495 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.832001 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.836871 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.836921 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.836932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.836949 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.836967 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:10Z","lastTransitionTime":"2025-12-09T14:57:10Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.851306 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.864478 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.864578 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.864603 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.864641 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.864669 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:10Z","lastTransitionTime":"2025-12-09T14:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.880102 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.889410 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.889461 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.889471 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.889486 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.889497 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:10Z","lastTransitionTime":"2025-12-09T14:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.899909 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.908529 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.908604 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.908616 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.908641 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:10 crc kubenswrapper[5107]: I1209 14:57:10.908663 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:10Z","lastTransitionTime":"2025-12-09T14:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.920152 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.920534 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:57:10 crc kubenswrapper[5107]: E1209 14:57:10.920579 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:11 crc kubenswrapper[5107]: E1209 14:57:11.020657 5107 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:11 crc kubenswrapper[5107]: E1209 14:57:11.121270 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:11 crc kubenswrapper[5107]: E1209 14:57:11.221973 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:11 crc kubenswrapper[5107]: E1209 14:57:11.322645 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:11 crc kubenswrapper[5107]: E1209 14:57:11.424151 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:11 crc kubenswrapper[5107]: E1209 14:57:11.525025 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:11 crc kubenswrapper[5107]: E1209 14:57:11.625993 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:11 crc kubenswrapper[5107]: E1209 14:57:11.727359 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:11 crc kubenswrapper[5107]: E1209 14:57:11.827961 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:11 crc kubenswrapper[5107]: E1209 14:57:11.928429 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.029618 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: I1209 14:57:12.123948 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:57:12 crc kubenswrapper[5107]: I1209 14:57:12.124445 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:57:12 crc kubenswrapper[5107]: I1209 14:57:12.125604 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:12 crc kubenswrapper[5107]: I1209 14:57:12.125676 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:12 crc kubenswrapper[5107]: I1209 14:57:12.125699 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.126585 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:57:12 crc kubenswrapper[5107]: I1209 14:57:12.127019 5107 scope.go:117] "RemoveContainer" containerID="dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.127378 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:57:12 crc 
kubenswrapper[5107]: E1209 14:57:12.130282 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.230934 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.331436 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.432620 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.533790 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.634363 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.734872 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.835860 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.899053 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:57:12 crc kubenswrapper[5107]: E1209 14:57:12.936034 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.036648 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.136814 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.237477 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.283271 5107 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.338748 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.439837 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.540170 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.638844 5107 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.642863 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.643151 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.643381 5107 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.643604 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.643793 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:13Z","lastTransitionTime":"2025-12-09T14:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.738354 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.745991 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.746054 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.746071 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.746096 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.746113 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:13Z","lastTransitionTime":"2025-12-09T14:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.749479 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.771142 5107 apiserver.go:52] "Watching apiserver" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.775275 5107 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.775828 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9jq8t","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-operator/iptables-alerter-5jnd7","openshift-dns/node-resolver-hk6gf","openshift-multus/multus-additional-cni-plugins-s44qp","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj","openshift-image-registry/node-ca-gfdn8","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/network-metrics-daemon-6xk48","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-node-9rjcr","openshift-multus/multus-g7sv4","openshift-network-diagnostics/network-check-target-fhkjl"] Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.777061 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.777907 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.777973 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.778195 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.778403 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.778869 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.779928 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.779977 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.780779 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.781217 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.783290 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.784403 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.784805 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.785407 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.785490 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.786918 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.790674 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.792404 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.795320 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.795556 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.796965 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.798516 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.798539 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.798612 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.798549 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.799017 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: 
I1209 14:57:13.799819 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.801144 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.807201 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.809599 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.809987 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.810173 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.810378 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.810639 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.810732 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.812781 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.818180 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.818304 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.818397 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.822177 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.822323 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.824728 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.825082 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.825098 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.825744 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.827041 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.828983 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.829036 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.830100 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.830860 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.831432 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.831833 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.831999 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.832172 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.832301 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.832887 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.833965 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.834479 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.835052 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.839987 5107 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.842517 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.848704 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.848811 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.848821 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.848836 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.848845 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:13Z","lastTransitionTime":"2025-12-09T14:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.852237 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.852584 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.854462 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.864181 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.871780 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.879492 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.890101 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.898904 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.908615 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.919380 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.921725 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.921813 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.921839 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.921865 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.921890 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.922059 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.922498 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.922670 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.922722 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.922676 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.922688 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.922749 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.922795 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.922997 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923001 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923466 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923546 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.922840 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923660 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923712 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923731 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923736 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923826 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923881 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923908 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923956 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.923980 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924015 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924029 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924061 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924107 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924296 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924325 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924399 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924021 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924561 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924437 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.924991 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.925441 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.925468 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.925526 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.925779 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.926130 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.926315 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.926903 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.926724 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.926910 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.926854 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.926894 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.926927 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927038 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927061 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927081 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927101 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927119 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927139 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927283 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927306 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927327 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927358 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927373 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927377 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927392 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927414 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927431 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927449 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927465 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927481 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927496 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927514 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927529 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927554 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.927575 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.928087 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.928196 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.928225 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.928249 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.928594 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.928692 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.928909 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.928929 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.928963 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.928993 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929070 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929129 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929318 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929382 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929404 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929456 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929488 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929636 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929668 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.930120 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.930230 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.930263 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" 
(UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.930771 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929807 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929808 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929832 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.929979 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.930835 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.930056 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.930248 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.930660 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.930738 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.931297 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.931715 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.931329 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.931807 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.931806 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.931820 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.931871 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.931910 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932256 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932308 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932229 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":
\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932367 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932383 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932412 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932455 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932472 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932490 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932525 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932545 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932565 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932584 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932701 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932757 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932802 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.932949 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933057 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933124 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933316 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933394 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933427 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933518 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933558 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933616 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933667 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933669 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933692 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933709 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933784 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933818 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933834 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933844 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933869 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933894 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933917 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933940 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933963 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.933961 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934052 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934084 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934114 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934138 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934161 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934188 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934198 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934219 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934246 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934276 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934305 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934353 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934371 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934386 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934417 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934443 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934470 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934500 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934524 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934628 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934659 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934691 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934719 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934743 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934768 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934800 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934825 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934851 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934875 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934897 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: 
\"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934923 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934948 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934971 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934996 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935020 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935047 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935072 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935099 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935121 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935142 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: 
\"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935167 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935196 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935220 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935246 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935269 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935298 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934662 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934728 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.934737 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935010 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935034 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935030 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935277 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937431 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935303 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935582 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.935519 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.936096 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.936238 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.936386 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.936433 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.936501 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.936887 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.936919 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937258 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937640 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937683 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937711 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937740 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937769 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937797 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937827 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937853 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937881 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937913 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: 
\"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937938 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937962 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937984 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938011 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938036 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938061 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938091 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938115 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938141 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938163 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938184 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938207 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938237 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938258 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938379 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938403 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938425 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938451 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938474 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938496 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938521 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938550 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938600 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938626 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938650 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938674 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938698 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938723 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938745 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938769 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938792 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938815 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938841 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938864 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938888 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938914 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938935 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938962 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939017 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 
14:57:13.939041 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939066 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939090 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939113 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939137 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939163 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939188 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939212 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939238 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939260 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 
14:57:13.939285 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939313 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939357 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939574 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939603 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939654 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939682 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939704 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939730 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939754 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:57:13 crc 
kubenswrapper[5107]: I1209 14:57:13.939779 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939806 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939832 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939858 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939884 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939908 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939934 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939959 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939988 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940015 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:57:13 crc 
kubenswrapper[5107]: I1209 14:57:13.940046 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940078 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940106 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940132 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940155 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940182 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940207 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940664 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940692 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940715 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: 
\"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.937266 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.938475 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939208 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.939750 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940013 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940022 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940202 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940387 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940617 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940638 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940724 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940814 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.941744 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:57:14.441716278 +0000 UTC m=+82.165421167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.942164 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.942259 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.942296 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.942302 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.942933 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940914 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.942940 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.940982 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.941151 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.941214 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.943518 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.943516 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.943617 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.941405 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.941525 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.942954 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.943081 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.943225 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.943252 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.943367 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.943460 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.943926 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.944037 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.945128 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.945162 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.944469 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.944699 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.945393 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.945406 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.945468 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.945804 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946047 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946495 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946527 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946637 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.943370 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946747 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946767 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946789 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946582 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946830 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946897 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946981 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947040 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947080 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.946317 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947123 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947223 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947263 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947295 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947327 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947416 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-run-multus-certs\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947448 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hsll\" (UniqueName: \"kubernetes.io/projected/468a62a3-c55d-40e0-bc1f-d01a979f017a-kube-api-access-4hsll\") pod 
\"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947479 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947507 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-proxy-tls\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947532 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-mcd-auth-proxy-config\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947557 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-openvswitch\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947587 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947610 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-ovn-kubernetes\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947637 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947662 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-var-lib-cni-multus\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc 
kubenswrapper[5107]: I1209 14:57:13.947690 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-cnibin\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947716 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947753 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-system-cni-dir\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947780 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-etc-kubernetes\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947806 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947828 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-slash\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947848 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-netd\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947870 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947895 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/468a62a3-c55d-40e0-bc1f-d01a979f017a-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947918 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947944 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-hostroot\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947970 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947996 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948024 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-run-k8s-cni-cncf-io\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948046 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-daemon-config\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948077 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947263 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947304 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947322 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.948154 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948233 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.948261 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:14.448239864 +0000 UTC m=+82.171944753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948497 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948670 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947524 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947621 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947728 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947740 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947959 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.947979 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948044 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948809 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-var-lib-openvswitch\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948863 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-bin\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948886 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-config\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948952 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2g2\" (UniqueName: \"kubernetes.io/projected/357946f5-b5ee-4739-a2c3-62beb5aedb57-kube-api-access-qr2g2\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.948977 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/468a62a3-c55d-40e0-bc1f-d01a979f017a-cni-binary-copy\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.949019 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6aac14e3-5594-400a-a5f6-f00359244626-hosts-file\") pod \"node-resolver-hk6gf\" (UID: \"6aac14e3-5594-400a-a5f6-f00359244626\") " pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.949099 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-ovn\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.949127 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovn-node-metrics-cert\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.949173 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljp8p\" (UniqueName: 
\"kubernetes.io/projected/b75d4675-9c37-47cf-8fa3-11097aa379ca-kube-api-access-ljp8p\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.949204 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.949270 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8snpj\" (UniqueName: \"kubernetes.io/projected/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-kube-api-access-8snpj\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.949297 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-netns\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.950469 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.950485 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.951673 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.951687 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.952763 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-cni-dir\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.952820 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-system-cni-dir\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.952913 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-rootfs\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.952979 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-systemd\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953083 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlxcf\" (UniqueName: \"kubernetes.io/projected/f154303d-e14b-4854-8f94-194d0f338f98-kube-api-access-mlxcf\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953157 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/357946f5-b5ee-4739-a2c3-62beb5aedb57-cni-binary-copy\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953208 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953221 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953355 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-run-netns\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953384 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-os-release\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953438 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/035458af-eba0-4241-bcac-4e11d6358b21-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953469 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-etc-openvswitch\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953493 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-log-socket\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953524 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.953768 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954038 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954206 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954248 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954286 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcnzq\" (UniqueName: \"kubernetes.io/projected/6aac14e3-5594-400a-a5f6-f00359244626-kube-api-access-gcnzq\") pod \"node-resolver-hk6gf\" (UID: \"6aac14e3-5594-400a-a5f6-f00359244626\") " pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954307 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/468a62a3-c55d-40e0-bc1f-d01a979f017a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954353 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f91a655-0e59-4855-bb0c-acbc64e10ed7-serviceca\") pod \"node-ca-gfdn8\" (UID: \"6f91a655-0e59-4855-bb0c-acbc64e10ed7\") " pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954379 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-socket-dir-parent\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954406 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-var-lib-cni-bin\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954428 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jbbvk\" (UniqueName: \"kubernetes.io/projected/035458af-eba0-4241-bcac-4e11d6358b21-kube-api-access-jbbvk\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954475 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-node-log\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954496 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-env-overrides\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954540 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954568 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.954617 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954668 5107 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.954679 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955129 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-kubelet\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955199 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955472 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-cnibin\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955602 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-os-release\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955613 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955854 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955859 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955896 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955926 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955939 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:13Z","lastTransitionTime":"2025-12-09T14:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955736 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-var-lib-kubelet\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.955515 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.955946 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:14.455826129 +0000 UTC m=+82.179531018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.956382 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6aac14e3-5594-400a-a5f6-f00359244626-tmp-dir\") pod \"node-resolver-hk6gf\" (UID: \"6aac14e3-5594-400a-a5f6-f00359244626\") " pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.956419 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-script-lib\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.956450 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.956468 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.956497 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/6f91a655-0e59-4855-bb0c-acbc64e10ed7-host\") pod \"node-ca-gfdn8\" (UID: \"6f91a655-0e59-4855-bb0c-acbc64e10ed7\") " pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.956412 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.956873 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq48q\" (UniqueName: \"kubernetes.io/projected/6f91a655-0e59-4855-bb0c-acbc64e10ed7-kube-api-access-sq48q\") pod \"node-ca-gfdn8\" (UID: \"6f91a655-0e59-4855-bb0c-acbc64e10ed7\") " pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957012 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-conf-dir\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957044 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-systemd-units\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957589 5107 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957615 5107 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957631 5107 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957646 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957659 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957672 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957686 5107 
reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957699 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957713 5107 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957729 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957744 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957757 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957770 5107 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957783 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957796 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957809 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957823 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957837 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957851 5107 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957865 5107 reconciler_common.go:299] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957879 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957892 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957925 5107 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957939 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957954 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957967 5107 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957980 5107 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.957996 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958007 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958019 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958031 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958042 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958400 5107 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958417 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958427 5107 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958439 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958448 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958457 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958467 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958293 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958362 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958477 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958531 5107 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958547 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958563 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958578 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958592 5107 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958607 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958626 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958642 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958655 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958669 5107 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958682 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958697 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: 
\"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958709 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958723 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958743 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958758 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958772 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958787 5107 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958801 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958814 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958827 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958840 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958854 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958867 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958879 5107 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on 
node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958892 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958904 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958917 5107 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958929 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958941 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958956 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958968 5107 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958981 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.958993 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959005 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959019 5107 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959031 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959044 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" 
DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959056 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959068 5107 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959081 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959092 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959109 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959121 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959133 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959149 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959164 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959178 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959191 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959206 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959219 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959230 5107 reconciler_common.go:299] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959243 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959256 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959269 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959282 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959294 5107 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959308 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959322 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959350 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959363 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959422 5107 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959435 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959448 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959461 5107 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959462 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959473 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959522 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959536 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959550 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959563 5107 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959573 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959587 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959602 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959613 5107 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959628 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959640 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: 
\"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959652 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959662 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959674 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959685 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959699 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959710 5107 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959720 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959730 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959740 5107 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959750 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959761 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959770 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959781 5107 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959792 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959803 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959812 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959823 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959835 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959846 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959857 5107 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959868 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959879 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959890 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959900 5107 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959910 5107 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959921 5107 reconciler_common.go:299] "Volume detached 
for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959931 5107 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959943 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959952 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959962 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959974 5107 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959984 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.959993 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960006 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960015 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960024 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960038 5107 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960047 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960056 5107 reconciler_common.go:299] "Volume 
detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960066 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960076 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960085 5107 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960095 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960105 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960115 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960125 5107 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960136 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.960176 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.961055 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.961070 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.961083 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.961166 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:14.461149422 +0000 UTC m=+82.184854311 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.961478 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.962146 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.962161 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.962170 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:13 crc kubenswrapper[5107]: E1209 14:57:13.962205 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:14.462196891 +0000 UTC m=+82.185901780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.963296 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.963982 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.964089 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.964888 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.964970 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.965651 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.966613 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.967432 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.969045 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.970647 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.971419 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.971586 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.975549 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.977218 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.977857 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.977939 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.977964 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.978762 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.981515 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.983392 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.983622 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.983791 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.984468 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.984591 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.984750 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.984755 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.985214 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.985276 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.985355 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.985756 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986044 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6xk48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f154303d-e14b-4854-8f94-194d0f338f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6xk48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986111 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986114 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986151 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986216 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986256 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986296 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986543 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986748 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986822 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986943 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.986983 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.987425 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.987507 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.987534 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.987771 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.987868 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.987875 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.988137 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.988156 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.988181 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.989362 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.989690 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.989724 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.989743 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.991149 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.991286 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:57:13 crc kubenswrapper[5107]: I1209 14:57:13.993077 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.005721 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9jq8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.016024 5107 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.016846 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.020772 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.026054 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.027029 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.036281 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.046659 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.049839 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.049986 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.050940 5107 scope.go:117] "RemoveContainer" containerID="dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.051165 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.056907 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.058055 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.058086 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.058107 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.058127 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.058137 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:14Z","lastTransitionTime":"2025-12-09T14:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.060621 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovn-node-metrics-cert\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.060647 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljp8p\" (UniqueName: \"kubernetes.io/projected/b75d4675-9c37-47cf-8fa3-11097aa379ca-kube-api-access-ljp8p\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.060664 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.060682 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8snpj\" (UniqueName: \"kubernetes.io/projected/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-kube-api-access-8snpj\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.060698 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-netns\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.060740 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-netns\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.060773 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061286 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-cni-dir\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061320 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-system-cni-dir\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" 
Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061359 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-rootfs\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061381 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-systemd\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061404 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mlxcf\" (UniqueName: \"kubernetes.io/projected/f154303d-e14b-4854-8f94-194d0f338f98-kube-api-access-mlxcf\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061465 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-systemd\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061511 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/357946f5-b5ee-4739-a2c3-62beb5aedb57-cni-binary-copy\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061519 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-rootfs\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061534 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-system-cni-dir\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061536 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-run-netns\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061614 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-os-release\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061651 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/035458af-eba0-4241-bcac-4e11d6358b21-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061674 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-etc-openvswitch\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061695 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-log-socket\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061728 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gcnzq\" (UniqueName: \"kubernetes.io/projected/6aac14e3-5594-400a-a5f6-f00359244626-kube-api-access-gcnzq\") pod \"node-resolver-hk6gf\" (UID: \"6aac14e3-5594-400a-a5f6-f00359244626\") " pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061750 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/468a62a3-c55d-40e0-bc1f-d01a979f017a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061772 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f91a655-0e59-4855-bb0c-acbc64e10ed7-serviceca\") pod \"node-ca-gfdn8\" (UID: \"6f91a655-0e59-4855-bb0c-acbc64e10ed7\") " pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061792 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-socket-dir-parent\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061813 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-var-lib-cni-bin\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.061834 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbvk\" (UniqueName: \"kubernetes.io/projected/035458af-eba0-4241-bcac-4e11d6358b21-kube-api-access-jbbvk\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:14 crc 
kubenswrapper[5107]: I1209 14:57:14.061831 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-cni-dir\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062055 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-os-release\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062487 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-socket-dir-parent\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062525 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/357946f5-b5ee-4739-a2c3-62beb5aedb57-cni-binary-copy\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062560 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/468a62a3-c55d-40e0-bc1f-d01a979f017a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062605 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-var-lib-cni-bin\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062656 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-log-socket\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062692 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-etc-openvswitch\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062727 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-node-log\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062758 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-env-overrides\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062789 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-node-log\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062809 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062845 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-kubelet\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062873 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-cnibin\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062893 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-os-release\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062914 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-var-lib-kubelet\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062936 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6aac14e3-5594-400a-a5f6-f00359244626-tmp-dir\") pod \"node-resolver-hk6gf\" (UID: \"6aac14e3-5594-400a-a5f6-f00359244626\") " pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062957 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-script-lib\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.062986 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f91a655-0e59-4855-bb0c-acbc64e10ed7-host\") pod \"node-ca-gfdn8\" (UID: \"6f91a655-0e59-4855-bb0c-acbc64e10ed7\") " pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:14 crc 
kubenswrapper[5107]: I1209 14:57:14.063017 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sq48q\" (UniqueName: \"kubernetes.io/projected/6f91a655-0e59-4855-bb0c-acbc64e10ed7-kube-api-access-sq48q\") pod \"node-ca-gfdn8\" (UID: \"6f91a655-0e59-4855-bb0c-acbc64e10ed7\") " pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063039 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-conf-dir\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063061 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-systemd-units\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063092 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-run-multus-certs\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063119 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4hsll\" (UniqueName: \"kubernetes.io/projected/468a62a3-c55d-40e0-bc1f-d01a979f017a-kube-api-access-4hsll\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063145 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-proxy-tls\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063173 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-mcd-auth-proxy-config\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063206 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-openvswitch\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063231 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 
09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063250 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-ovn-kubernetes\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063287 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-var-lib-cni-multus\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063311 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-cnibin\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063323 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-conf-dir\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063356 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063386 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-system-cni-dir\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063406 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-etc-kubernetes\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.063424 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063441 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-slash\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063467 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-netd\") pod 
\"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063484 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-ovn-kubernetes\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063500 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-netd\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063531 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-run-multus-certs\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063287 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-env-overrides\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063577 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-cnibin\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063605 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063636 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-etc-kubernetes\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063639 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-kubelet\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063443 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f91a655-0e59-4855-bb0c-acbc64e10ed7-serviceca\") pod \"node-ca-gfdn8\" (UID: \"6f91a655-0e59-4855-bb0c-acbc64e10ed7\") " pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:14 crc 
kubenswrapper[5107]: I1209 14:57:14.063679 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-slash\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.063491 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs podName:f154303d-e14b-4854-8f94-194d0f338f98 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:14.563474258 +0000 UTC m=+82.287179247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs") pod "network-metrics-daemon-6xk48" (UID: "f154303d-e14b-4854-8f94-194d0f338f98") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063695 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-cnibin\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063719 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-systemd-units\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063753 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-os-release\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063796 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063815 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-var-lib-kubelet\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063822 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/468a62a3-c55d-40e0-bc1f-d01a979f017a-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063848 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063870 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-hostroot\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063893 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063923 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-run-k8s-cni-cncf-io\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063943 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-daemon-config\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063974 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-var-lib-openvswitch\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063999 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-bin\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.064022 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-config\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.064043 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr2g2\" (UniqueName: \"kubernetes.io/projected/357946f5-b5ee-4739-a2c3-62beb5aedb57-kube-api-access-qr2g2\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.064084 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6aac14e3-5594-400a-a5f6-f00359244626-tmp-dir\") pod 
\"node-resolver-hk6gf\" (UID: \"6aac14e3-5594-400a-a5f6-f00359244626\") " pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.064417 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-openvswitch\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.064456 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-hostroot\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.064504 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-system-cni-dir\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.063799 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f91a655-0e59-4855-bb0c-acbc64e10ed7-host\") pod \"node-ca-gfdn8\" (UID: \"6f91a655-0e59-4855-bb0c-acbc64e10ed7\") " pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.064918 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/468a62a3-c55d-40e0-bc1f-d01a979f017a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.064990 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.065036 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-var-lib-openvswitch\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.065630 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/468a62a3-c55d-40e0-bc1f-d01a979f017a-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.065656 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-var-lib-cni-multus\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 
14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.065712 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-bin\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.064063 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/468a62a3-c55d-40e0-bc1f-d01a979f017a-cni-binary-copy\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.065824 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6aac14e3-5594-400a-a5f6-f00359244626-hosts-file\") pod \"node-resolver-hk6gf\" (UID: \"6aac14e3-5594-400a-a5f6-f00359244626\") " pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.065855 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-ovn\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.065879 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-run-k8s-cni-cncf-io\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.066050 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-ovn\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.066267 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-script-lib\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.066774 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.066828 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6aac14e3-5594-400a-a5f6-f00359244626-hosts-file\") pod \"node-resolver-hk6gf\" (UID: \"6aac14e3-5594-400a-a5f6-f00359244626\") " pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.066854 5107 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc 
kubenswrapper[5107]: I1209 14:57:14.066825 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.066871 5107 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.066991 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067007 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067021 5107 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc 
kubenswrapper[5107]: I1209 14:57:14.067034 5107 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067048 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067059 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067072 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067087 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067098 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067113 5107 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067126 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067138 5107 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067151 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067165 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067177 5107 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067190 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067203 
5107 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067215 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067227 5107 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067240 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067252 5107 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067263 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067275 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067287 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067299 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067310 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067322 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067358 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067371 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067383 5107 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067395 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067407 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067423 5107 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067449 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067461 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067472 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067485 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067496 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067508 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067520 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067532 5107 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067543 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067556 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067568 5107 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067579 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067590 5107 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067601 5107 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067613 5107 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067627 5107 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067639 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067650 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067662 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067675 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067690 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067703 5107 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067716 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath 
\"\"" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.067805 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/357946f5-b5ee-4739-a2c3-62beb5aedb57-host-run-netns\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.068486 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.068664 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/357946f5-b5ee-4739-a2c3-62beb5aedb57-multus-daemon-config\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.068919 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.069061 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovn-node-metrics-cert\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.069432 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/035458af-eba0-4241-bcac-4e11d6358b21-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.070489 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-config\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.070521 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/468a62a3-c55d-40e0-bc1f-d01a979f017a-cni-binary-copy\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.073004 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-proxy-tls\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " 
pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.073759 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-mcd-auth-proxy-config\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.078494 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8snpj\" (UniqueName: \"kubernetes.io/projected/902902bc-6dc6-4c5f-8e1b-9399b7c813c7-kube-api-access-8snpj\") pod \"machine-config-daemon-9jq8t\" (UID: \"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\") " pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.079988 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljp8p\" (UniqueName: \"kubernetes.io/projected/b75d4675-9c37-47cf-8fa3-11097aa379ca-kube-api-access-ljp8p\") pod \"ovnkube-node-9rjcr\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.081350 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hsll\" (UniqueName: \"kubernetes.io/projected/468a62a3-c55d-40e0-bc1f-d01a979f017a-kube-api-access-4hsll\") pod \"multus-additional-cni-plugins-s44qp\" (UID: \"468a62a3-c55d-40e0-bc1f-d01a979f017a\") " pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.083708 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.084302 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq48q\" (UniqueName: \"kubernetes.io/projected/6f91a655-0e59-4855-bb0c-acbc64e10ed7-kube-api-access-sq48q\") pod \"node-ca-gfdn8\" (UID: \"6f91a655-0e59-4855-bb0c-acbc64e10ed7\") " pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.088076 5107 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mlxcf\" (UniqueName: \"kubernetes.io/projected/f154303d-e14b-4854-8f94-194d0f338f98-kube-api-access-mlxcf\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.089807 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr2g2\" (UniqueName: \"kubernetes.io/projected/357946f5-b5ee-4739-a2c3-62beb5aedb57-kube-api-access-qr2g2\") pod \"multus-g7sv4\" (UID: \"357946f5-b5ee-4739-a2c3-62beb5aedb57\") " pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.090326 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcnzq\" (UniqueName: \"kubernetes.io/projected/6aac14e3-5594-400a-a5f6-f00359244626-kube-api-access-gcnzq\") pod \"node-resolver-hk6gf\" (UID: \"6aac14e3-5594-400a-a5f6-f00359244626\") " pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.091035 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbvk\" (UniqueName: \"kubernetes.io/projected/035458af-eba0-4241-bcac-4e11d6358b21-kube-api-access-jbbvk\") pod \"ovnkube-control-plane-57b78d8988-6zphj\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.091597 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.101709 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:57:14 crc kubenswrapper[5107]: W1209 14:57:14.102963 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-8570d620f9195d3b0eddb92b5f62deabce9032ecde7bbcef19a4037652b676bc WatchSource:0}: Error finding container 8570d620f9195d3b0eddb92b5f62deabce9032ecde7bbcef19a4037652b676bc: Status 404 returned error can't find the container with id 8570d620f9195d3b0eddb92b5f62deabce9032ecde7bbcef19a4037652b676bc Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.106353 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 09 14:57:14 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: source /etc/kubernetes/apiserver-url.env Dec 09 14:57:14 crc kubenswrapper[5107]: else Dec 09 14:57:14 crc kubenswrapper[5107]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 09 14:57:14 crc kubenswrapper[5107]: exit 1 Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVa
r{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.107544 5107 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.110382 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:57:14 crc kubenswrapper[5107]: W1209 14:57:14.110955 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-ba7a3c95442dc472c129bd4a37613cef1e88004139dc95e095fd622e4f557cfc WatchSource:0}: Error finding container ba7a3c95442dc472c129bd4a37613cef1e88004139dc95e095fd622e4f557cfc: Status 404 returned error can't find the container with id ba7a3c95442dc472c129bd4a37613cef1e88004139dc95e095fd622e4f557cfc Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.116410 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: source "/env/_master" Dec 09 14:57:14 crc kubenswrapper[5107]: set +o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 09 14:57:14 crc kubenswrapper[5107]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 09 14:57:14 crc kubenswrapper[5107]: ho_enable="--enable-hybrid-overlay" Dec 09 14:57:14 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 09 14:57:14 crc kubenswrapper[5107]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 09 14:57:14 crc kubenswrapper[5107]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 09 14:57:14 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:57:14 crc kubenswrapper[5107]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --webhook-host=127.0.0.1 \ Dec 09 14:57:14 crc kubenswrapper[5107]: --webhook-port=9743 \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${ho_enable} \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-interconnect \ Dec 09 14:57:14 crc kubenswrapper[5107]: --disable-approver \ Dec 09 14:57:14 crc kubenswrapper[5107]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --wait-for-kubernetes-api=200s \ Dec 09 14:57:14 crc kubenswrapper[5107]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.117900 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hk6gf" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.118913 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: source "/env/_master" Dec 09 14:57:14 crc kubenswrapper[5107]: set +o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 09 14:57:14 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:57:14 crc kubenswrapper[5107]: --disable-webhook \ Dec 09 14:57:14 crc kubenswrapper[5107]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: W1209 14:57:14.119488 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-96f951899650bdb4307344b8ff3f0a442e803e61e50bc6013663ccd82731f75e WatchSource:0}: Error finding container 
96f951899650bdb4307344b8ff3f0a442e803e61e50bc6013663ccd82731f75e: Status 404 returned error can't find the container with id 96f951899650bdb4307344b8ff3f0a442e803e61e50bc6013663ccd82731f75e Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.119982 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.123293 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.124556 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 09 14:57:14 crc kubenswrapper[5107]: W1209 14:57:14.130323 5107 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aac14e3_5594_400a_a5f6_f00359244626.slice/crio-b25751f36257ffde5ce7f8aadf5972d55ffb49f4ecf63e4b20a8a4c1556d26cb WatchSource:0}: Error finding container b25751f36257ffde5ce7f8aadf5972d55ffb49f4ecf63e4b20a8a4c1556d26cb: Status 404 returned error can't find the container with id b25751f36257ffde5ce7f8aadf5972d55ffb49f4ecf63e4b20a8a4c1556d26cb Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.132204 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 09 14:57:14 crc kubenswrapper[5107]: set -uo pipefail Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 09 14:57:14 crc kubenswrapper[5107]: HOSTS_FILE="/etc/hosts" Dec 09 14:57:14 crc kubenswrapper[5107]: TEMP_FILE="/tmp/hosts.tmp" Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # Make a temporary file with the old hosts file's attributes. Dec 09 14:57:14 crc kubenswrapper[5107]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 09 14:57:14 crc kubenswrapper[5107]: echo "Failed to preserve hosts file. Exiting." Dec 09 14:57:14 crc kubenswrapper[5107]: exit 1 Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: while true; do Dec 09 14:57:14 crc kubenswrapper[5107]: declare -A svc_ips Dec 09 14:57:14 crc kubenswrapper[5107]: for svc in "${services[@]}"; do Dec 09 14:57:14 crc kubenswrapper[5107]: # Fetch service IP from cluster dns if present. We make several tries Dec 09 14:57:14 crc kubenswrapper[5107]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 09 14:57:14 crc kubenswrapper[5107]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 09 14:57:14 crc kubenswrapper[5107]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 09 14:57:14 crc kubenswrapper[5107]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:57:14 crc kubenswrapper[5107]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:57:14 crc kubenswrapper[5107]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:57:14 crc kubenswrapper[5107]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 09 14:57:14 crc kubenswrapper[5107]: for i in ${!cmds[*]} Dec 09 14:57:14 crc kubenswrapper[5107]: do Dec 09 14:57:14 crc kubenswrapper[5107]: ips=($(eval "${cmds[i]}")) Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: svc_ips["${svc}"]="${ips[@]}" Dec 09 14:57:14 crc kubenswrapper[5107]: break Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # Update /etc/hosts only if we get valid service IPs Dec 09 14:57:14 crc kubenswrapper[5107]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 09 14:57:14 crc kubenswrapper[5107]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 09 14:57:14 crc kubenswrapper[5107]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 09 14:57:14 crc kubenswrapper[5107]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 09 14:57:14 crc kubenswrapper[5107]: sleep 60 & wait Dec 09 14:57:14 crc kubenswrapper[5107]: continue Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # Append resolver entries for services Dec 09 14:57:14 crc kubenswrapper[5107]: rc=0 Dec 09 14:57:14 crc kubenswrapper[5107]: for svc in "${!svc_ips[@]}"; do Dec 09 14:57:14 crc kubenswrapper[5107]: for ip in ${svc_ips[${svc}]}; do Dec 09 14:57:14 crc kubenswrapper[5107]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ $rc -ne 0 ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: sleep 60 & wait Dec 09 14:57:14 crc kubenswrapper[5107]: continue Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 09 14:57:14 crc kubenswrapper[5107]: # Replace /etc/hosts with our modified version if needed Dec 09 14:57:14 crc kubenswrapper[5107]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 09 14:57:14 crc kubenswrapper[5107]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: sleep 60 & wait Dec 09 14:57:14 crc kubenswrapper[5107]: unset svc_ips Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcnzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-hk6gf_openshift-dns(6aac14e3-5594-400a-a5f6-f00359244626): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.133917 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gfdn8" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.133907 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-hk6gf" podUID="6aac14e3-5594-400a-a5f6-f00359244626" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.141026 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 14:57:14 crc kubenswrapper[5107]: W1209 14:57:14.145421 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f91a655_0e59_4855_bb0c_acbc64e10ed7.slice/crio-c8a2b125bb40a257eadb447bff03bd6d2d9491c68146c3ee8f8b6e51c803d919 WatchSource:0}: Error finding container c8a2b125bb40a257eadb447bff03bd6d2d9491c68146c3ee8f8b6e51c803d919: Status 404 returned error can't find the container with id c8a2b125bb40a257eadb447bff03bd6d2d9491c68146c3ee8f8b6e51c803d919 Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.149924 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 09 14:57:14 crc kubenswrapper[5107]: while [ true ]; Dec 09 14:57:14 crc kubenswrapper[5107]: do Dec 09 14:57:14 crc kubenswrapper[5107]: for f in $(ls /tmp/serviceca); do Dec 09 14:57:14 crc kubenswrapper[5107]: echo $f Dec 09 14:57:14 crc kubenswrapper[5107]: ca_file_path="/tmp/serviceca/${f}" Dec 09 14:57:14 crc kubenswrapper[5107]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 09 14:57:14 crc kubenswrapper[5107]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 09 14:57:14 crc kubenswrapper[5107]: if [ -e "${reg_dir_path}" ]; then Dec 09 14:57:14 crc kubenswrapper[5107]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 09 14:57:14 crc kubenswrapper[5107]: else Dec 09 14:57:14 crc kubenswrapper[5107]: mkdir $reg_dir_path Dec 09 14:57:14 crc kubenswrapper[5107]: cp $ca_file_path $reg_dir_path/ca.crt Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: for d in $(ls /etc/docker/certs.d); do Dec 09 14:57:14 crc kubenswrapper[5107]: echo $d Dec 09 14:57:14 crc kubenswrapper[5107]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 09 14:57:14 crc kubenswrapper[5107]: reg_conf_path="/tmp/serviceca/${dp}" Dec 09 14:57:14 crc kubenswrapper[5107]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 09 14:57:14 crc kubenswrapper[5107]: rm -rf /etc/docker/certs.d/$d Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: sleep 60 & wait ${!} Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq48q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-gfdn8_openshift-image-registry(6f91a655-0e59-4855-bb0c-acbc64e10ed7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.150890 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.151288 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-gfdn8" podUID="6f91a655-0e59-4855-bb0c-acbc64e10ed7" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.152932 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 14:57:14 crc kubenswrapper[5107]: W1209 14:57:14.154497 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod035458af_eba0_4241_bcac_4e11d6358b21.slice/crio-a820694d9153c2a954e23485a43029cd4958b9989b94aad2b90bac7eb0e544e7 WatchSource:0}: Error finding container a820694d9153c2a954e23485a43029cd4958b9989b94aad2b90bac7eb0e544e7: Status 404 returned error can't find the container with id a820694d9153c2a954e23485a43029cd4958b9989b94aad2b90bac7eb0e544e7 Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.158191 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 09 14:57:14 crc kubenswrapper[5107]: set -euo pipefail Dec 09 14:57:14 crc kubenswrapper[5107]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 09 14:57:14 crc kubenswrapper[5107]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 09 14:57:14 crc kubenswrapper[5107]: # As the secret mount is optional we must wait for the files to be present. Dec 09 14:57:14 crc kubenswrapper[5107]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 09 14:57:14 crc kubenswrapper[5107]: TS=$(date +%s) Dec 09 14:57:14 crc kubenswrapper[5107]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 09 14:57:14 crc kubenswrapper[5107]: HAS_LOGGED_INFO=0 Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: log_missing_certs(){ Dec 09 14:57:14 crc kubenswrapper[5107]: CUR_TS=$(date +%s) Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 09 14:57:14 crc kubenswrapper[5107]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 09 14:57:14 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 09 14:57:14 crc kubenswrapper[5107]: HAS_LOGGED_INFO=1 Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: } Dec 09 14:57:14 crc kubenswrapper[5107]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 09 14:57:14 crc kubenswrapper[5107]: log_missing_certs Dec 09 14:57:14 crc kubenswrapper[5107]: sleep 5 Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 09 14:57:14 crc kubenswrapper[5107]: exec /usr/bin/kube-rbac-proxy \ Dec 09 14:57:14 crc kubenswrapper[5107]: --logtostderr \ Dec 09 14:57:14 crc kubenswrapper[5107]: --secure-listen-address=:9108 \ Dec 09 14:57:14 crc kubenswrapper[5107]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 09 14:57:14 crc kubenswrapper[5107]: --upstream=http://127.0.0.1:29108/ \ Dec 09 14:57:14 crc kubenswrapper[5107]: --tls-private-key-file=${TLS_PK} \ Dec 09 14:57:14 crc kubenswrapper[5107]: --tls-cert-file=${TLS_CERT} Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbbvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-6zphj_openshift-ovn-kubernetes(035458af-eba0-4241-bcac-4e11d6358b21): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.159898 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.159940 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.159989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.160011 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.160026 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:14Z","lastTransitionTime":"2025-12-09T14:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.160671 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: source "/env/_master" Dec 09 14:57:14 crc kubenswrapper[5107]: set +o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # This is needed so that converting clusters from GA to TP Dec 09 14:57:14 crc kubenswrapper[5107]: # will rollout control plane pods as well Dec 09 14:57:14 crc kubenswrapper[5107]: network_segmentation_enabled_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: multi_network_enabled_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "true" != "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: route_advertisements_enable_flag= Dec 09 14:57:14 crc 
kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # Enable multi-network policy if configured (control-plane always full mode) Dec 09 14:57:14 crc kubenswrapper[5107]: multi_network_policy_enabled_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # Enable admin network policy if configured (control-plane always full mode) Dec 09 14:57:14 crc kubenswrapper[5107]: admin_network_policy_enabled_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: if [ "shared" == "shared" ]; then Dec 09 14:57:14 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode shared" Dec 09 14:57:14 crc kubenswrapper[5107]: elif [ "shared" == "local" ]; then Dec 09 14:57:14 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode local" Dec 09 14:57:14 crc kubenswrapper[5107]: else Dec 09 14:57:14 crc kubenswrapper[5107]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 09 14:57:14 crc kubenswrapper[5107]: exit 1 Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 09 14:57:14 crc kubenswrapper[5107]: exec /usr/bin/ovnkube \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-interconnect \ Dec 09 14:57:14 crc kubenswrapper[5107]: --init-cluster-manager "${K8S_NODE}" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 09 14:57:14 crc kubenswrapper[5107]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --metrics-bind-address "127.0.0.1:29108" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --metrics-enable-pprof \ Dec 09 14:57:14 crc kubenswrapper[5107]: --metrics-enable-config-duration \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${ovn_v4_join_subnet_opt} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${ovn_v6_join_subnet_opt} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${dns_name_resolver_enabled_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${persistent_ips_enabled_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${multi_network_enabled_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${network_segmentation_enabled_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${gateway_mode_flags} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${route_advertisements_enable_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${preconfigured_udn_addresses_enable_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-egress-ip=true \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-egress-firewall=true \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-egress-qos=true \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-egress-service=true \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-multicast \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-multi-external-gateway=true \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${multi_network_policy_enabled_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${admin_network_policy_enabled_flag} Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbbvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-6zphj_openshift-ovn-kubernetes(035458af-eba0-4241-bcac-4e11d6358b21): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.161888 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" podUID="035458af-eba0-4241-bcac-4e11d6358b21" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.162325 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.169774 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-g7sv4" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.176880 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8snpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-9jq8t_openshift-machine-config-operator(902902bc-6dc6-4c5f-8e1b-9399b7c813c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.179160 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-s44qp" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.180633 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8snpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-9jq8t_openshift-machine-config-operator(902902bc-6dc6-4c5f-8e1b-9399b7c813c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.182600 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.189755 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 09 14:57:14 crc kubenswrapper[5107]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 09 14:57:14 crc kubenswrapper[5107]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr2g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-g7sv4_openshift-multus(357946f5-b5ee-4739-a2c3-62beb5aedb57): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.190844 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-g7sv4" podUID="357946f5-b5ee-4739-a2c3-62beb5aedb57" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.191508 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 09 14:57:14 crc kubenswrapper[5107]: apiVersion: v1 Dec 09 14:57:14 crc kubenswrapper[5107]: clusters: Dec 09 14:57:14 crc kubenswrapper[5107]: - cluster: Dec 09 14:57:14 crc kubenswrapper[5107]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 09 14:57:14 crc kubenswrapper[5107]: server: https://api-int.crc.testing:6443 Dec 09 14:57:14 crc kubenswrapper[5107]: name: default-cluster Dec 09 14:57:14 crc kubenswrapper[5107]: contexts: Dec 09 14:57:14 crc kubenswrapper[5107]: - context: Dec 09 14:57:14 crc kubenswrapper[5107]: cluster: default-cluster Dec 09 14:57:14 crc kubenswrapper[5107]: namespace: default Dec 09 14:57:14 crc kubenswrapper[5107]: user: default-auth Dec 09 14:57:14 crc kubenswrapper[5107]: name: default-context Dec 09 14:57:14 crc kubenswrapper[5107]: current-context: default-context Dec 09 14:57:14 crc kubenswrapper[5107]: kind: Config Dec 09 14:57:14 crc kubenswrapper[5107]: preferences: {} Dec 09 14:57:14 crc kubenswrapper[5107]: users: Dec 09 14:57:14 crc kubenswrapper[5107]: - name: default-auth Dec 09 14:57:14 crc kubenswrapper[5107]: user: Dec 09 14:57:14 crc kubenswrapper[5107]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 09 14:57:14 crc kubenswrapper[5107]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 09 14:57:14 crc kubenswrapper[5107]: EOF Dec 09 14:57:14 crc kubenswrapper[5107]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljp8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9rjcr_openshift-ovn-kubernetes(b75d4675-9c37-47cf-8fa3-11097aa379ca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.193442 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" Dec 09 14:57:14 crc kubenswrapper[5107]: W1209 14:57:14.196375 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod468a62a3_c55d_40e0_bc1f_d01a979f017a.slice/crio-75a487902a3f1406df0ffd70d564879229f299aaea4d1a4eb8159b4091a3198c WatchSource:0}: Error finding container 75a487902a3f1406df0ffd70d564879229f299aaea4d1a4eb8159b4091a3198c: Status 404 returned error can't find the container with id 75a487902a3f1406df0ffd70d564879229f299aaea4d1a4eb8159b4091a3198c Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.199264 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4hsll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-s44qp_openshift-multus(468a62a3-c55d-40e0-bc1f-d01a979f017a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.200508 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-s44qp" podUID="468a62a3-c55d-40e0-bc1f-d01a979f017a" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.262898 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.262953 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.262969 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.262989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.263005 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:14Z","lastTransitionTime":"2025-12-09T14:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.365889 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.365950 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.365968 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.365992 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.366009 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:14Z","lastTransitionTime":"2025-12-09T14:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.468924 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.469358 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.469370 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.469385 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.469395 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:14Z","lastTransitionTime":"2025-12-09T14:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.472639 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.472762 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.472824 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.472867 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.472897 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473047 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473064 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473090 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473082 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:57:15.473021701 +0000 UTC m=+83.196726590 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473139 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473100 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473210 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:15.473189935 +0000 UTC m=+83.196894994 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473101 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473246 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:15.473236006 +0000 UTC m=+83.196941085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473292 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:15.473272077 +0000 UTC m=+83.196976966 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473070 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473356 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.473452 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:15.473429782 +0000 UTC m=+83.197134831 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.572534 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.572711 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.572741 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.572813 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.572841 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:14Z","lastTransitionTime":"2025-12-09T14:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.573488 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.573909 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.574125 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs podName:f154303d-e14b-4854-8f94-194d0f338f98 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:15.574090991 +0000 UTC m=+83.297795880 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs") pod "network-metrics-daemon-6xk48" (UID: "f154303d-e14b-4854-8f94-194d0f338f98") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.675487 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.675844 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.675928 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.676015 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.676074 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:14Z","lastTransitionTime":"2025-12-09T14:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.779201 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.779271 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.779288 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.779309 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.779322 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:14Z","lastTransitionTime":"2025-12-09T14:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.794947 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerStarted","Data":"79f1c4a7c26eac86b0cbccd2041c843533457d514a69a9bd632969d3b1532e69"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.798907 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gfdn8" event={"ID":"6f91a655-0e59-4855-bb0c-acbc64e10ed7","Type":"ContainerStarted","Data":"c8a2b125bb40a257eadb447bff03bd6d2d9491c68146c3ee8f8b6e51c803d919"} Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.800401 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 09 14:57:14 crc kubenswrapper[5107]: apiVersion: v1 Dec 09 14:57:14 crc kubenswrapper[5107]: clusters: Dec 09 14:57:14 crc kubenswrapper[5107]: - cluster: Dec 09 14:57:14 crc kubenswrapper[5107]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 09 14:57:14 crc kubenswrapper[5107]: server: https://api-int.crc.testing:6443 Dec 09 14:57:14 crc kubenswrapper[5107]: name: default-cluster Dec 09 14:57:14 crc kubenswrapper[5107]: contexts: Dec 09 14:57:14 crc kubenswrapper[5107]: - context: Dec 09 14:57:14 crc kubenswrapper[5107]: cluster: default-cluster Dec 09 14:57:14 crc kubenswrapper[5107]: namespace: default Dec 09 14:57:14 crc kubenswrapper[5107]: user: default-auth Dec 09 14:57:14 crc kubenswrapper[5107]: name: default-context Dec 09 14:57:14 crc kubenswrapper[5107]: current-context: default-context Dec 09 14:57:14 crc kubenswrapper[5107]: kind: Config Dec 09 14:57:14 crc kubenswrapper[5107]: preferences: {} Dec 09 14:57:14 crc kubenswrapper[5107]: users: Dec 09 14:57:14 crc kubenswrapper[5107]: - name: default-auth Dec 09 14:57:14 crc kubenswrapper[5107]: user: Dec 09 14:57:14 crc kubenswrapper[5107]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 09 14:57:14 crc kubenswrapper[5107]: client-key: 
/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 09 14:57:14 crc kubenswrapper[5107]: EOF Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljp8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9rjcr_openshift-ovn-kubernetes(b75d4675-9c37-47cf-8fa3-11097aa379ca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.801026 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"ba7a3c95442dc472c129bd4a37613cef1e88004139dc95e095fd622e4f557cfc"} Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.801652 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.802132 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 09 14:57:14 crc kubenswrapper[5107]: while [ true ]; Dec 09 14:57:14 crc kubenswrapper[5107]: do Dec 09 14:57:14 crc kubenswrapper[5107]: for f in $(ls /tmp/serviceca); do Dec 09 14:57:14 crc kubenswrapper[5107]: echo $f Dec 09 14:57:14 crc kubenswrapper[5107]: ca_file_path="/tmp/serviceca/${f}" Dec 09 14:57:14 crc kubenswrapper[5107]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 09 14:57:14 crc kubenswrapper[5107]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 09 14:57:14 crc kubenswrapper[5107]: if [ -e "${reg_dir_path}" ]; then Dec 09 14:57:14 crc kubenswrapper[5107]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 09 14:57:14 crc kubenswrapper[5107]: else Dec 09 14:57:14 crc kubenswrapper[5107]: mkdir $reg_dir_path Dec 09 14:57:14 crc kubenswrapper[5107]: cp $ca_file_path $reg_dir_path/ca.crt Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: for d in $(ls /etc/docker/certs.d); do Dec 09 14:57:14 crc kubenswrapper[5107]: echo $d Dec 09 14:57:14 crc kubenswrapper[5107]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 09 14:57:14 crc 
kubenswrapper[5107]: reg_conf_path="/tmp/serviceca/${dp}" Dec 09 14:57:14 crc kubenswrapper[5107]: if [ ! -e "${reg_conf_path}" ]; then Dec 09 14:57:14 crc kubenswrapper[5107]: rm -rf /etc/docker/certs.d/$d Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: sleep 60 & wait ${!} Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq48q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-gfdn8_openshift-image-registry(6f91a655-0e59-4855-bb0c-acbc64e10ed7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.803199 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerStarted","Data":"2912948a2008ba06c9cef75243e03942b5f03da7f02656b269c2b37e8dfdbd86"} Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.804421 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-gfdn8" podUID="6f91a655-0e59-4855-bb0c-acbc64e10ed7" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.805725 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: source "/env/_master" Dec 09 14:57:14 crc kubenswrapper[5107]: set +o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 09 14:57:14 crc kubenswrapper[5107]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 09 14:57:14 crc kubenswrapper[5107]: ho_enable="--enable-hybrid-overlay" Dec 09 14:57:14 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 09 14:57:14 crc kubenswrapper[5107]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 09 14:57:14 crc kubenswrapper[5107]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 09 14:57:14 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:57:14 crc kubenswrapper[5107]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --webhook-host=127.0.0.1 \ Dec 09 14:57:14 crc kubenswrapper[5107]: --webhook-port=9743 \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${ho_enable} \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-interconnect \ Dec 09 14:57:14 crc kubenswrapper[5107]: --disable-approver \ Dec 09 14:57:14 crc kubenswrapper[5107]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --wait-for-kubernetes-api=200s \ Dec 09 14:57:14 crc kubenswrapper[5107]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.807187 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hk6gf" event={"ID":"6aac14e3-5594-400a-a5f6-f00359244626","Type":"ContainerStarted","Data":"b25751f36257ffde5ce7f8aadf5972d55ffb49f4ecf63e4b20a8a4c1556d26cb"} Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.809096 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: source "/env/_master" Dec 09 14:57:14 crc kubenswrapper[5107]: set +o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 09 14:57:14 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:57:14 crc kubenswrapper[5107]: --disable-webhook \ Dec 09 14:57:14 crc kubenswrapper[5107]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.809207 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8snpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-9jq8t_openshift-machine-config-operator(902902bc-6dc6-4c5f-8e1b-9399b7c813c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.809268 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 09 14:57:14 crc kubenswrapper[5107]: set -uo pipefail Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 09 14:57:14 crc kubenswrapper[5107]: HOSTS_FILE="/etc/hosts" Dec 09 14:57:14 crc kubenswrapper[5107]: TEMP_FILE="/tmp/hosts.tmp" Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # Make a temporary file with the old hosts file's attributes. Dec 09 14:57:14 crc kubenswrapper[5107]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 09 14:57:14 crc kubenswrapper[5107]: echo "Failed to preserve hosts file. Exiting." 
Dec 09 14:57:14 crc kubenswrapper[5107]: exit 1 Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: while true; do Dec 09 14:57:14 crc kubenswrapper[5107]: declare -A svc_ips Dec 09 14:57:14 crc kubenswrapper[5107]: for svc in "${services[@]}"; do Dec 09 14:57:14 crc kubenswrapper[5107]: # Fetch service IP from cluster dns if present. We make several tries Dec 09 14:57:14 crc kubenswrapper[5107]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 09 14:57:14 crc kubenswrapper[5107]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 09 14:57:14 crc kubenswrapper[5107]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 09 14:57:14 crc kubenswrapper[5107]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:57:14 crc kubenswrapper[5107]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:57:14 crc kubenswrapper[5107]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:57:14 crc kubenswrapper[5107]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 09 14:57:14 crc kubenswrapper[5107]: for i in ${!cmds[*]} Dec 09 14:57:14 crc kubenswrapper[5107]: do Dec 09 14:57:14 crc kubenswrapper[5107]: ips=($(eval "${cmds[i]}")) Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: svc_ips["${svc}"]="${ips[@]}" Dec 09 14:57:14 crc kubenswrapper[5107]: break Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # Update /etc/hosts only if we get valid service IPs Dec 09 14:57:14 crc kubenswrapper[5107]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 09 14:57:14 crc kubenswrapper[5107]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 09 14:57:14 crc kubenswrapper[5107]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 09 14:57:14 crc kubenswrapper[5107]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 09 14:57:14 crc kubenswrapper[5107]: sleep 60 & wait Dec 09 14:57:14 crc kubenswrapper[5107]: continue Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # Append resolver entries for services Dec 09 14:57:14 crc kubenswrapper[5107]: rc=0 Dec 09 14:57:14 crc kubenswrapper[5107]: for svc in "${!svc_ips[@]}"; do Dec 09 14:57:14 crc kubenswrapper[5107]: for ip in ${svc_ips[${svc}]}; do Dec 09 14:57:14 crc kubenswrapper[5107]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ $rc -ne 0 ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: sleep 60 & wait Dec 09 14:57:14 crc kubenswrapper[5107]: continue Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 09 14:57:14 crc kubenswrapper[5107]: # Replace /etc/hosts with our modified version if needed Dec 09 14:57:14 crc kubenswrapper[5107]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 09 14:57:14 crc kubenswrapper[5107]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: sleep 60 & wait Dec 09 14:57:14 crc kubenswrapper[5107]: unset svc_ips Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcnzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-hk6gf_openshift-dns(6aac14e3-5594-400a-a5f6-f00359244626): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.809322 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g7sv4" event={"ID":"357946f5-b5ee-4739-a2c3-62beb5aedb57","Type":"ContainerStarted","Data":"87b64e4e3c4ae600c7eb74d9107bc4b598959e763292604d74ecd1b139bd7375"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.809723 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.811584 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"8570d620f9195d3b0eddb92b5f62deabce9032ecde7bbcef19a4037652b676bc"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.811643 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" event={"ID":"468a62a3-c55d-40e0-bc1f-d01a979f017a","Type":"ContainerStarted","Data":"75a487902a3f1406df0ffd70d564879229f299aaea4d1a4eb8159b4091a3198c"} Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.811131 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.812474 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8snpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-9jq8t_openshift-machine-config-operator(902902bc-6dc6-4c5f-8e1b-9399b7c813c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.812535 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-hk6gf" podUID="6aac14e3-5594-400a-a5f6-f00359244626" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.812975 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 09 14:57:14 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: source /etc/kubernetes/apiserver-url.env Dec 09 14:57:14 crc kubenswrapper[5107]: else Dec 09 14:57:14 crc kubenswrapper[5107]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 09 14:57:14 crc kubenswrapper[5107]: exit 1 Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 09 14:57:14 crc kubenswrapper[5107]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.813396 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 09 14:57:14 crc kubenswrapper[5107]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 09 14:57:14 crc kubenswrapper[5107]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr2g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-g7sv4_openshift-multus(357946f5-b5ee-4739-a2c3-62beb5aedb57): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.813582 5107 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.814044 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.814061 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" event={"ID":"035458af-eba0-4241-bcac-4e11d6358b21","Type":"ContainerStarted","Data":"a820694d9153c2a954e23485a43029cd4958b9989b94aad2b90bac7eb0e544e7"} Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.814441 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-g7sv4" podUID="357946f5-b5ee-4739-a2c3-62beb5aedb57" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.815344 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 09 14:57:14 crc kubenswrapper[5107]: set -euo pipefail Dec 09 14:57:14 crc kubenswrapper[5107]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 09 14:57:14 crc kubenswrapper[5107]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 09 14:57:14 crc kubenswrapper[5107]: # As the secret mount is optional we must wait for the files to be present. Dec 09 14:57:14 crc kubenswrapper[5107]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 09 14:57:14 crc kubenswrapper[5107]: TS=$(date +%s) Dec 09 14:57:14 crc kubenswrapper[5107]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 09 14:57:14 crc kubenswrapper[5107]: HAS_LOGGED_INFO=0 Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: log_missing_certs(){ Dec 09 14:57:14 crc kubenswrapper[5107]: CUR_TS=$(date +%s) Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 09 14:57:14 crc kubenswrapper[5107]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 09 14:57:14 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 09 14:57:14 crc kubenswrapper[5107]: HAS_LOGGED_INFO=1 Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: } Dec 09 14:57:14 crc kubenswrapper[5107]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 09 14:57:14 crc kubenswrapper[5107]: log_missing_certs Dec 09 14:57:14 crc kubenswrapper[5107]: sleep 5 Dec 09 14:57:14 crc kubenswrapper[5107]: done Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 09 14:57:14 crc kubenswrapper[5107]: exec /usr/bin/kube-rbac-proxy \ Dec 09 14:57:14 crc kubenswrapper[5107]: --logtostderr \ Dec 09 14:57:14 crc kubenswrapper[5107]: --secure-listen-address=:9108 \ Dec 09 14:57:14 crc kubenswrapper[5107]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 09 14:57:14 crc kubenswrapper[5107]: --upstream=http://127.0.0.1:29108/ \ Dec 09 14:57:14 crc kubenswrapper[5107]: --tls-private-key-file=${TLS_PK} \ Dec 09 14:57:14 crc kubenswrapper[5107]: --tls-cert-file=${TLS_CERT} Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbbvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-6zphj_openshift-ovn-kubernetes(035458af-eba0-4241-bcac-4e11d6358b21): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.815426 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"96f951899650bdb4307344b8ff3f0a442e803e61e50bc6013663ccd82731f75e"} Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.815976 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4hsll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-s44qp_openshift-multus(468a62a3-c55d-40e0-bc1f-d01a979f017a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.816719 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.817365 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:14 crc kubenswrapper[5107]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: source "/env/_master" Dec 09 14:57:14 crc kubenswrapper[5107]: set +o allexport Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 09 
14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # This is needed so that converting clusters from GA to TP Dec 09 14:57:14 crc kubenswrapper[5107]: # will rollout control plane pods as well Dec 09 14:57:14 crc kubenswrapper[5107]: network_segmentation_enabled_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: multi_network_enabled_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "true" != "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: route_advertisements_enable_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # Enable multi-network policy if configured (control-plane always full mode) Dec 09 14:57:14 crc kubenswrapper[5107]: multi_network_policy_enabled_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: # Enable admin network policy if configured (control-plane always full mode) Dec 09 14:57:14 crc kubenswrapper[5107]: admin_network_policy_enabled_flag= Dec 09 14:57:14 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Dec 09 14:57:14 crc kubenswrapper[5107]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: if [ "shared" == "shared" ]; then Dec 09 14:57:14 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode shared" Dec 09 14:57:14 crc kubenswrapper[5107]: elif [ "shared" == "local" ]; then Dec 09 14:57:14 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode local" Dec 09 14:57:14 crc kubenswrapper[5107]: else Dec 09 14:57:14 crc kubenswrapper[5107]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 09 14:57:14 crc kubenswrapper[5107]: exit 1 Dec 09 14:57:14 crc kubenswrapper[5107]: fi Dec 09 14:57:14 crc kubenswrapper[5107]: Dec 09 14:57:14 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 09 14:57:14 crc kubenswrapper[5107]: exec /usr/bin/ovnkube \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-interconnect \ Dec 09 14:57:14 crc kubenswrapper[5107]: --init-cluster-manager "${K8S_NODE}" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 09 14:57:14 crc kubenswrapper[5107]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --metrics-bind-address "127.0.0.1:29108" \ Dec 09 14:57:14 crc kubenswrapper[5107]: --metrics-enable-pprof \ Dec 09 14:57:14 crc kubenswrapper[5107]: --metrics-enable-config-duration \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${ovn_v4_join_subnet_opt} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${ovn_v6_join_subnet_opt} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${dns_name_resolver_enabled_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${persistent_ips_enabled_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${multi_network_enabled_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${network_segmentation_enabled_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${gateway_mode_flags} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${route_advertisements_enable_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${preconfigured_udn_addresses_enable_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-egress-ip=true \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-egress-firewall=true \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-egress-qos=true \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-egress-service=true \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-multicast \ Dec 09 14:57:14 crc kubenswrapper[5107]: --enable-multi-external-gateway=true \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${multi_network_policy_enabled_flag} \ Dec 09 14:57:14 crc kubenswrapper[5107]: ${admin_network_policy_enabled_flag} Dec 09 14:57:14 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbbvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-6zphj_openshift-ovn-kubernetes(035458af-eba0-4241-bcac-4e11d6358b21): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:14 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.817359 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-s44qp" podUID="468a62a3-c55d-40e0-bc1f-d01a979f017a" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.818410 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 09 14:57:14 crc kubenswrapper[5107]: E1209 14:57:14.818447 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" podUID="035458af-eba0-4241-bcac-4e11d6358b21" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.821513 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.822806 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.823449 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.827738 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.829687 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.833762 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.836049 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.838540 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.839588 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.842176 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.843757 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.845887 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.846842 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.848670 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.849940 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.850408 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.851641 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.852134 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.852887 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.854180 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.855262 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.856643 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.857736 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.859105 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.860960 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.861377 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.862731 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.863658 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.864866 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.866212 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.867246 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.867982 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.871282 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.871494 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.872039 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 
14:57:14.873999 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.875207 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.878255 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.879693 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.880783 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.881530 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.881810 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.881852 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.881864 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.881884 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.881896 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:14Z","lastTransitionTime":"2025-12-09T14:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.882747 5107 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.882856 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.883880 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.886146 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.887199 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.888157 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.889792 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.890450 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.891999 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.892937 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.894160 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.895169 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.895743 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50482b5b-33e4-4375-b4ec-a1c0ebe2c67b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:57:03Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:57:02.715968 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:57:02.716095 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:57:02.716826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1531350067/tls.crt::/tmp/serving-cert-1531350067/tls.key\\\\\\\"\\\\nI1209 14:57:02.999617 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:57:03.002958 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:57:03.003001 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:57:03.003048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:57:03.003055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:57:03.007135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:57:03.007175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007181 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:57:03.007189 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:57:03.007194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:57:03.007197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:57:03.007189 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:57:03.010146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.896871 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.897899 5107 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.899229 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.900072 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.901445 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.902326 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.903911 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.905307 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.906723 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.907494 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.908658 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.908724 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.919134 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.929269 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.938288 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6xk48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f154303d-e14b-4854-8f94-194d0f338f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6xk48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.947909 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9jq8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.968489 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6e4831-9436-4325-a9e7-527721a1b000\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d96edfb4f9b0ef8edd081ac1fa1a187a546788c54bb2dda482d17bcc17d37a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://065d21f3d7d7ddf58cbad5139235697b04e42f29e61ced7e1efba1de3555ae53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf0a296ed891533100528ac09d9e9b54984dcbd0b82dc3b71f1fdbb6a16a436e\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ea5d580221066f1f04f75db2bc68afd26f37ae4331f25097f74f72a526bb4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459987e648ce73b2166ed2c4bda9dcb074c0f7ed1087788e224ae7445163de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040
cbf2aba08914cc67b41dd22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.981051 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059f16de-7ff9-4b6b-9730-03b18da8c48a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d7820e8e52e22f661c6cb580380ffd7f90da43e5aa390c88f1d73ea9a5afe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5da0b9bfefb1775c7984b5581acb0ff5ac968a5aa3abacc68f3a3cad90bed680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b741aedb777807194ba434397b4d2a6063c1648282792a0f733d3b340b26774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.985326 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.985396 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.985410 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.985452 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.985466 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:14Z","lastTransitionTime":"2025-12-09T14:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.990733 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:14 crc kubenswrapper[5107]: I1209 14:57:14.999014 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.007712 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name
\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.020514 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.030582 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.039321 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.047758 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.057756 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.068262 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.077433 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.088040 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.088093 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.088103 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.088120 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.088132 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:15Z","lastTransitionTime":"2025-12-09T14:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.094135 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.103812 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.112437 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.119779 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.136832 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.147763 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50482b5b-33e4-4375-b4ec-a1c0ebe2c67b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:57:03Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:57:02.715968 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:57:02.716095 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:57:02.716826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1531350067/tls.crt::/tmp/serving-cert-1531350067/tls.key\\\\\\\"\\\\nI1209 14:57:02.999617 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:57:03.002958 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:57:03.003001 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:57:03.003048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:57:03.003055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:57:03.007135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:57:03.007175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007181 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:57:03.007189 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:57:03.007194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:57:03.007197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:57:03.007189 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:57:03.010146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.166471 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.190089 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.190414 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.190531 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.190646 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.190743 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:15Z","lastTransitionTime":"2025-12-09T14:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.207168 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.250174 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.287630 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6xk48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f154303d-e14b-4854-8f94-194d0f338f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6xk48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.293126 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.293252 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.293386 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.293458 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.293526 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:15Z","lastTransitionTime":"2025-12-09T14:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.330044 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9jq8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.379235 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6e4831-9436-4325-a9e7-527721a1b000\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d96edfb4f9b0ef8edd081ac1fa1a187a546788c54bb2dda482d17bcc17d37a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://065d21f3d7d7ddf58cbad5139235697b04e42f29e61ced7e1efba1de3555ae53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf0a296ed891533100528ac09d9e9b54984dcbd0b82dc3b71f1fdbb6a16a436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ea5d580221066f1f04f75db2bc68afd26f37ae4331f25097f74f72a526bb4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459987e648ce73b2166ed2c4bda9dcb074c0f7ed1087788e224ae7445163de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
dctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.395989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.396277 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.396560 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.396768 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.396844 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:15Z","lastTransitionTime":"2025-12-09T14:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.407967 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059f16de-7ff9-4b6b-9730-03b18da8c48a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d7820e8e52e22f661c6cb580380ffd7f90da43e5aa390c88f1d73ea9a5afe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5da0b9bfefb1775c7984b5581acb0ff5ac968a5aa3abacc68f3a3cad90bed680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b741aedb777807194ba434397b4d2a6063c1648282792a0f733d3b340b26774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.484864 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.484984 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.485014 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485070 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:57:17.48503154 +0000 UTC m=+85.208736429 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485105 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485150 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485167 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485172 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:17.485153253 +0000 UTC m=+85.208858192 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485179 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485215 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:17.485205845 +0000 UTC m=+85.208910734 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.485207 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.485272 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485305 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485353 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485371 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:17.485363019 +0000 UTC m=+85.209067908 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485377 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485393 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.485426 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:17.48541771 +0000 UTC m=+85.209122599 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.498881 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.498945 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.498956 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.498971 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.498982 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:15Z","lastTransitionTime":"2025-12-09T14:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.586307 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.586476 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.586901 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs podName:f154303d-e14b-4854-8f94-194d0f338f98 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:17.586883472 +0000 UTC m=+85.310588361 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs") pod "network-metrics-daemon-6xk48" (UID: "f154303d-e14b-4854-8f94-194d0f338f98") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.600631 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.600688 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.600699 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.600715 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.600725 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:15Z","lastTransitionTime":"2025-12-09T14:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.702547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.702591 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.702600 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.702615 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.702624 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:15Z","lastTransitionTime":"2025-12-09T14:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.804948 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.805025 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.805045 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.805069 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.805087 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:15Z","lastTransitionTime":"2025-12-09T14:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.816957 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.816962 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.817066 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.817106 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.817191 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.817293 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.817924 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:15 crc kubenswrapper[5107]: E1209 14:57:15.818065 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.907524 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.908199 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.908321 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.908465 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:15 crc kubenswrapper[5107]: I1209 14:57:15.908559 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:15Z","lastTransitionTime":"2025-12-09T14:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.011060 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.011454 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.011530 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.011602 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.011714 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:16Z","lastTransitionTime":"2025-12-09T14:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.114492 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.114810 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.114874 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.114937 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.114999 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:16Z","lastTransitionTime":"2025-12-09T14:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.218479 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.218533 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.218546 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.218566 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.218580 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:16Z","lastTransitionTime":"2025-12-09T14:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.321608 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.321973 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.322100 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.322263 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.322393 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:16Z","lastTransitionTime":"2025-12-09T14:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.424348 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.424719 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.424834 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.424932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.425022 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:16Z","lastTransitionTime":"2025-12-09T14:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.527213 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.527652 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.527831 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.527996 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.528130 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:16Z","lastTransitionTime":"2025-12-09T14:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.630456 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.630508 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.630521 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.630541 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.630554 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:16Z","lastTransitionTime":"2025-12-09T14:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.732895 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.733215 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.733284 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.733380 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.733460 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:16Z","lastTransitionTime":"2025-12-09T14:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.836387 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.836831 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.837241 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.837688 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.837771 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:16Z","lastTransitionTime":"2025-12-09T14:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.940049 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.940113 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.940132 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.940154 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:16 crc kubenswrapper[5107]: I1209 14:57:16.940172 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:16Z","lastTransitionTime":"2025-12-09T14:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.043266 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.043310 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.043320 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.043348 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.043358 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:17Z","lastTransitionTime":"2025-12-09T14:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.145068 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.145368 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.145446 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.145524 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.145594 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:17Z","lastTransitionTime":"2025-12-09T14:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.248687 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.248756 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.248772 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.248792 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.248805 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:17Z","lastTransitionTime":"2025-12-09T14:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.351708 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.351769 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.351783 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.351802 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.351814 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:17Z","lastTransitionTime":"2025-12-09T14:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.454371 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.454769 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.454860 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.454947 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.455013 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:17Z","lastTransitionTime":"2025-12-09T14:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.512254 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.512621 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:57:21.512536802 +0000 UTC m=+89.236241721 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.512987 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.513134 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.513288 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.513557 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.513156 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.513285 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 
14:57:17.513437 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.513950 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.513969 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.513664 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.514012 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.514021 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.513840 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:21.513810857 +0000 UTC m=+89.237515746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.514059 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:21.514046983 +0000 UTC m=+89.237751872 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.514081 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:21.514072073 +0000 UTC m=+89.237776962 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.514093 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:21.514087454 +0000 UTC m=+89.237792343 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.557754 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.557806 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.557816 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.557832 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.557842 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:17Z","lastTransitionTime":"2025-12-09T14:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.614893 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.615053 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.615133 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs podName:f154303d-e14b-4854-8f94-194d0f338f98 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:21.615111873 +0000 UTC m=+89.338816762 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs") pod "network-metrics-daemon-6xk48" (UID: "f154303d-e14b-4854-8f94-194d0f338f98") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.660891 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.661169 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.661235 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.661320 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.661412 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:17Z","lastTransitionTime":"2025-12-09T14:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.703107 5107 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.763840 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.763931 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.763959 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.763990 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.764015 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:17Z","lastTransitionTime":"2025-12-09T14:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.817011 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.817011 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.817014 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.817200 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.817492 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.817539 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.817677 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:17 crc kubenswrapper[5107]: E1209 14:57:17.817790 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.867630 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.867743 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.867763 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.867829 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.867851 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:17Z","lastTransitionTime":"2025-12-09T14:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.970160 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.970208 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.970220 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.970256 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:17 crc kubenswrapper[5107]: I1209 14:57:17.970265 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:17Z","lastTransitionTime":"2025-12-09T14:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.072713 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.072763 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.072775 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.072794 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.072809 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:18Z","lastTransitionTime":"2025-12-09T14:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.175634 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.175841 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.175863 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.175896 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.175914 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:18Z","lastTransitionTime":"2025-12-09T14:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.278441 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.278708 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.278776 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.278837 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.278901 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:18Z","lastTransitionTime":"2025-12-09T14:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.382077 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.382451 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.382719 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.382883 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.383040 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:18Z","lastTransitionTime":"2025-12-09T14:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.485477 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.485827 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.485950 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.486062 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.486172 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:18Z","lastTransitionTime":"2025-12-09T14:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.589137 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.589498 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.589610 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.589765 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.589864 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:18Z","lastTransitionTime":"2025-12-09T14:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.692649 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.692962 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.693037 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.693123 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.693197 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:18Z","lastTransitionTime":"2025-12-09T14:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.796460 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.796514 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.796525 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.796542 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.796553 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:18Z","lastTransitionTime":"2025-12-09T14:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.898858 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.898942 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.898963 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.898992 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:18 crc kubenswrapper[5107]: I1209 14:57:18.899015 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:18Z","lastTransitionTime":"2025-12-09T14:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.001273 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.001313 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.001326 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.001358 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.001375 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:19Z","lastTransitionTime":"2025-12-09T14:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.104016 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.104066 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.104076 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.104092 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.104103 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:19Z","lastTransitionTime":"2025-12-09T14:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.207180 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.207253 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.207271 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.207299 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.207315 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:19Z","lastTransitionTime":"2025-12-09T14:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.309572 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.309617 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.309630 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.309644 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.309654 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:19Z","lastTransitionTime":"2025-12-09T14:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.413238 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.413318 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.413382 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.413411 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.413432 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:19Z","lastTransitionTime":"2025-12-09T14:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.516419 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.516469 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.516484 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.516502 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.516513 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:19Z","lastTransitionTime":"2025-12-09T14:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.619670 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.619738 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.619757 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.619783 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.619802 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:19Z","lastTransitionTime":"2025-12-09T14:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.722204 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.722329 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.722359 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.722377 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.722390 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:19Z","lastTransitionTime":"2025-12-09T14:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.817761 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:19 crc kubenswrapper[5107]: E1209 14:57:19.817966 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.818538 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:19 crc kubenswrapper[5107]: E1209 14:57:19.818637 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.818716 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:19 crc kubenswrapper[5107]: E1209 14:57:19.818791 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.818870 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:19 crc kubenswrapper[5107]: E1209 14:57:19.819070 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.824923 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.824969 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.824982 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.825001 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.825013 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:19Z","lastTransitionTime":"2025-12-09T14:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.927846 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.927903 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.927916 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.927934 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:19 crc kubenswrapper[5107]: I1209 14:57:19.927946 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:19Z","lastTransitionTime":"2025-12-09T14:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.030098 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.030947 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.031022 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.031086 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.031155 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:20Z","lastTransitionTime":"2025-12-09T14:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.133798 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.133881 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.133898 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.133915 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.133930 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:20Z","lastTransitionTime":"2025-12-09T14:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.236378 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.236455 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.236469 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.236494 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.236511 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:20Z","lastTransitionTime":"2025-12-09T14:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.339430 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.339478 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.339489 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.339506 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.339517 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:20Z","lastTransitionTime":"2025-12-09T14:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.441488 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.441531 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.441539 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.441553 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.441562 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:20Z","lastTransitionTime":"2025-12-09T14:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.543878 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.543953 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.543963 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.543979 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.543990 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:20Z","lastTransitionTime":"2025-12-09T14:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.646048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.646098 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.646108 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.646125 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.646136 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:20Z","lastTransitionTime":"2025-12-09T14:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.747987 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.748081 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.748103 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.748130 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.748157 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:20Z","lastTransitionTime":"2025-12-09T14:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.850415 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.850493 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.850504 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.850521 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.850533 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:20Z","lastTransitionTime":"2025-12-09T14:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.953754 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.953827 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.953840 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.953859 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:20 crc kubenswrapper[5107]: I1209 14:57:20.953871 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:20Z","lastTransitionTime":"2025-12-09T14:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.034490 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.034542 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.034556 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.034620 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.034638 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.047457 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.050797 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.050853 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.050880 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.050897 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.050908 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.059878 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.063282 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.063393 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.063411 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.063451 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.063468 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.073404 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.077230 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.077267 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.077278 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.077293 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.077307 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.086045 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.089815 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.089850 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.089860 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.089874 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.089885 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.105380 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.105606 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.107259 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.107307 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.107320 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.107363 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.107377 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.210374 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.210464 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.210482 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.210501 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.210514 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.313598 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.313648 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.313661 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.313680 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.313692 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.415685 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.415736 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.415746 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.415770 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.415781 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.519057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.519129 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.519148 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.519176 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.519196 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.559008 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.559276 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:57:29.559229181 +0000 UTC m=+97.282934110 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.559463 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.559605 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.559744 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.559744 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.559917 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.559970 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.559984 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:29.559952491 +0000 UTC m=+97.283657460 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.559999 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.559853 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.560093 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:29.560070204 +0000 UTC m=+97.283775093 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.560455 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.560484 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.560522 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.560610 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.560622 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:29.560602898 +0000 UTC m=+97.284307827 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.560784 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:29.560767892 +0000 UTC m=+97.284472971 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.622370 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.622417 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.622458 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.622478 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.622490 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.661502 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.661777 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.661921 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs podName:f154303d-e14b-4854-8f94-194d0f338f98 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:29.661883844 +0000 UTC m=+97.385588773 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs") pod "network-metrics-daemon-6xk48" (UID: "f154303d-e14b-4854-8f94-194d0f338f98") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.725233 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.725544 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.725650 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.725750 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.725843 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.817281 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.817329 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.817972 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.817740 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.817445 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.818131 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.818188 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:21 crc kubenswrapper[5107]: E1209 14:57:21.818402 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.832396 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.832476 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.832490 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.832510 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.832546 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.935087 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.935154 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.935168 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.935189 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:21 crc kubenswrapper[5107]: I1209 14:57:21.935204 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:21Z","lastTransitionTime":"2025-12-09T14:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.038328 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.038458 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.038485 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.038521 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.038546 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:22Z","lastTransitionTime":"2025-12-09T14:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.141223 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.141304 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.141316 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.141601 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.141639 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:22Z","lastTransitionTime":"2025-12-09T14:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.245246 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.245317 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.245382 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.245417 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.245441 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:22Z","lastTransitionTime":"2025-12-09T14:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.348055 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.348137 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.348245 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.348278 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.348294 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:22Z","lastTransitionTime":"2025-12-09T14:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.450799 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.450865 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.450878 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.450894 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.450903 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:22Z","lastTransitionTime":"2025-12-09T14:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.553887 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.553960 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.553972 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.553990 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.554002 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:22Z","lastTransitionTime":"2025-12-09T14:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.656245 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.656294 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.656309 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.656328 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.656375 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:22Z","lastTransitionTime":"2025-12-09T14:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.758852 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.758948 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.758967 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.758989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.759005 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:22Z","lastTransitionTime":"2025-12-09T14:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.827638 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.842514 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGrou
ps\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.853868 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.863532 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.863632 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.863659 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.863694 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.863722 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:22Z","lastTransitionTime":"2025-12-09T14:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.870665 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.888621 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b
21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.904679 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.920442 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.933927 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.951892 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.966817 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50482b5b-33e4-4375-b4ec-a1c0ebe2c67b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:57:03Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:57:02.715968 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:57:02.716095 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:57:02.716826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1531350067/tls.crt::/tmp/serving-cert-1531350067/tls.key\\\\\\\"\\\\nI1209 14:57:02.999617 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:57:03.002958 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:57:03.003001 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:57:03.003048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:57:03.003055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:57:03.007135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:57:03.007175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007181 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:57:03.007189 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:57:03.007194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:57:03.007197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:57:03.007189 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:57:03.010146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.968573 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.968633 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.968648 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.968671 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.968688 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:22Z","lastTransitionTime":"2025-12-09T14:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.981827 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:22 crc kubenswrapper[5107]: I1209 14:57:22.996318 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.008155 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.020780 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6xk48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f154303d-e14b-4854-8f94-194d0f338f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6xk48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.031300 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9jq8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.050492 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6e4831-9436-4325-a9e7-527721a1b000\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d96edfb4f9b0ef8edd081ac1fa1a187a546788c54bb2dda482d17bcc17d37a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://065d21f3d7d7ddf58cbad5139235697b04e42f29e61ced7e1efba1de3555ae53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf0a296ed891533100528ac09d9e9b54984dcbd0b82dc3b71f1fdbb6a16a436e\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ea5d580221066f1f04f75db2bc68afd26f37ae4331f25097f74f72a526bb4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459987e648ce73b2166ed2c4bda9dcb074c0f7ed1087788e224ae7445163de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040
cbf2aba08914cc67b41dd22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.062145 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059f16de-7ff9-4b6b-9730-03b18da8c48a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d7820e8e52e22f661c6cb580380ffd7f90da43e5aa390c88f1d73ea9a5afe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5da0b9bfefb1775c7984b5581acb0ff5ac968a5aa3abacc68f3a3cad90bed680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b741aedb777807194ba434397b4d2a6063c1648282792a0f733d3b340b26774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.070466 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.071014 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.071048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.071058 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.071072 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.071084 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:23Z","lastTransitionTime":"2025-12-09T14:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.079620 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.174469 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.174522 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.174533 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.174549 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.174560 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:23Z","lastTransitionTime":"2025-12-09T14:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.277302 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.277399 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.277411 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.277431 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.277445 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:23Z","lastTransitionTime":"2025-12-09T14:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.380112 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.380192 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.380205 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.380224 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.380235 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:23Z","lastTransitionTime":"2025-12-09T14:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.482842 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.482897 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.482909 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.482928 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.482942 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:23Z","lastTransitionTime":"2025-12-09T14:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.585001 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.585044 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.585055 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.585071 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.585081 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:23Z","lastTransitionTime":"2025-12-09T14:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.687115 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.687171 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.687182 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.687209 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.687223 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:23Z","lastTransitionTime":"2025-12-09T14:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.790034 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.790092 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.790105 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.790123 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.790135 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:23Z","lastTransitionTime":"2025-12-09T14:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.817133 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.817185 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.817185 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:23 crc kubenswrapper[5107]: E1209 14:57:23.817287 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:23 crc kubenswrapper[5107]: E1209 14:57:23.817437 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.817669 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:23 crc kubenswrapper[5107]: E1209 14:57:23.817753 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:23 crc kubenswrapper[5107]: E1209 14:57:23.817642 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.892635 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.892886 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.892897 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.892916 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.892927 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:23Z","lastTransitionTime":"2025-12-09T14:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.995168 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.995226 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.995238 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.995255 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:23 crc kubenswrapper[5107]: I1209 14:57:23.995268 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:23Z","lastTransitionTime":"2025-12-09T14:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.098097 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.098147 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.098159 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.098184 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.098198 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:24Z","lastTransitionTime":"2025-12-09T14:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.201206 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.201280 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.201298 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.201325 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.201371 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:24Z","lastTransitionTime":"2025-12-09T14:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.304469 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.304517 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.304526 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.304544 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.304558 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:24Z","lastTransitionTime":"2025-12-09T14:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.407463 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.407545 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.407578 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.407617 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.407641 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:24Z","lastTransitionTime":"2025-12-09T14:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.510271 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.510440 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.510470 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.510503 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.510527 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:24Z","lastTransitionTime":"2025-12-09T14:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.613536 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.613621 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.613653 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.613685 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.613707 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:24Z","lastTransitionTime":"2025-12-09T14:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.716276 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.716396 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.716423 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.716452 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.716475 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:24Z","lastTransitionTime":"2025-12-09T14:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.819618 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.820075 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.820309 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.820613 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.820842 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:24Z","lastTransitionTime":"2025-12-09T14:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.924056 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.924120 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.924130 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.924149 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:24 crc kubenswrapper[5107]: I1209 14:57:24.924186 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:24Z","lastTransitionTime":"2025-12-09T14:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.027509 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.027630 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.027651 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.027680 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.027699 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:25Z","lastTransitionTime":"2025-12-09T14:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.130932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.130986 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.131008 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.131029 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.131040 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:25Z","lastTransitionTime":"2025-12-09T14:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.234443 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.234509 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.234526 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.234548 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.234562 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:25Z","lastTransitionTime":"2025-12-09T14:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.337356 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.337406 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.337418 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.337436 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.337448 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:25Z","lastTransitionTime":"2025-12-09T14:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.439927 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.440040 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.440057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.440085 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.440103 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:25Z","lastTransitionTime":"2025-12-09T14:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.542598 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.542676 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.542692 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.542711 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.542723 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:25Z","lastTransitionTime":"2025-12-09T14:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.645420 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.645473 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.645483 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.645501 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.645513 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:25Z","lastTransitionTime":"2025-12-09T14:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.748477 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.748543 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.748561 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.748581 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.748596 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:25Z","lastTransitionTime":"2025-12-09T14:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.817685 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:25 crc kubenswrapper[5107]: E1209 14:57:25.817886 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.818394 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.818596 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:25 crc kubenswrapper[5107]: E1209 14:57:25.818842 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.819050 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:25 crc kubenswrapper[5107]: E1209 14:57:25.819014 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:25 crc kubenswrapper[5107]: E1209 14:57:25.819271 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:25 crc kubenswrapper[5107]: E1209 14:57:25.820826 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:25 crc kubenswrapper[5107]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 09 14:57:25 crc kubenswrapper[5107]: while [ true ]; Dec 09 14:57:25 crc kubenswrapper[5107]: do Dec 09 14:57:25 crc kubenswrapper[5107]: for f in $(ls /tmp/serviceca); do Dec 09 14:57:25 crc kubenswrapper[5107]: echo $f Dec 09 14:57:25 crc kubenswrapper[5107]: ca_file_path="/tmp/serviceca/${f}" Dec 09 14:57:25 crc kubenswrapper[5107]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 09 14:57:25 crc kubenswrapper[5107]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 09 14:57:25 crc kubenswrapper[5107]: if [ -e "${reg_dir_path}" ]; then Dec 09 14:57:25 crc kubenswrapper[5107]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 09 14:57:25 crc kubenswrapper[5107]: else Dec 09 14:57:25 crc kubenswrapper[5107]: mkdir $reg_dir_path Dec 09 14:57:25 crc kubenswrapper[5107]: cp $ca_file_path $reg_dir_path/ca.crt Dec 09 14:57:25 crc kubenswrapper[5107]: fi Dec 09 14:57:25 crc kubenswrapper[5107]: done Dec 09 14:57:25 crc kubenswrapper[5107]: for d in $(ls /etc/docker/certs.d); do Dec 09 14:57:25 crc kubenswrapper[5107]: echo $d Dec 09 14:57:25 crc kubenswrapper[5107]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 09 14:57:25 crc kubenswrapper[5107]: reg_conf_path="/tmp/serviceca/${dp}" Dec 09 14:57:25 crc kubenswrapper[5107]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 09 14:57:25 crc kubenswrapper[5107]: rm -rf /etc/docker/certs.d/$d Dec 09 14:57:25 crc kubenswrapper[5107]: fi Dec 09 14:57:25 crc kubenswrapper[5107]: done Dec 09 14:57:25 crc kubenswrapper[5107]: sleep 60 & wait ${!} Dec 09 14:57:25 crc kubenswrapper[5107]: done Dec 09 14:57:25 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq48q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-gfdn8_openshift-image-registry(6f91a655-0e59-4855-bb0c-acbc64e10ed7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:25 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.821827 5107 scope.go:117] "RemoveContainer" containerID="dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e" Dec 09 14:57:25 crc kubenswrapper[5107]: E1209 14:57:25.822159 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-gfdn8" podUID="6f91a655-0e59-4855-bb0c-acbc64e10ed7" Dec 09 14:57:25 crc kubenswrapper[5107]: E1209 14:57:25.822185 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.851608 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.851678 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.851691 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.851842 5107 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.851877 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:25Z","lastTransitionTime":"2025-12-09T14:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.954665 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.954734 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.954747 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.954774 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:25 crc kubenswrapper[5107]: I1209 14:57:25.954788 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:25Z","lastTransitionTime":"2025-12-09T14:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.058016 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.058095 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.058115 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.058141 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.058158 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:26Z","lastTransitionTime":"2025-12-09T14:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.161330 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.161421 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.161438 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.161463 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.161479 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:26Z","lastTransitionTime":"2025-12-09T14:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.264592 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.264657 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.264671 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.264689 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.264706 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:26Z","lastTransitionTime":"2025-12-09T14:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.367591 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.367685 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.367716 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.367748 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.367767 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:26Z","lastTransitionTime":"2025-12-09T14:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.470471 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.470553 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.470568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.470590 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.470604 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:26Z","lastTransitionTime":"2025-12-09T14:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.573424 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.573505 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.573519 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.573537 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.573551 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:26Z","lastTransitionTime":"2025-12-09T14:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.675845 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.675898 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.675910 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.675932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.675943 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:26Z","lastTransitionTime":"2025-12-09T14:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.779237 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.779300 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.779313 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.779354 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.779371 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:26Z","lastTransitionTime":"2025-12-09T14:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:26 crc kubenswrapper[5107]: E1209 14:57:26.821323 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:26 crc kubenswrapper[5107]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 09 14:57:26 crc kubenswrapper[5107]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 09 14:57:26 crc kubenswrapper[5107]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr2g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-g7sv4_openshift-multus(357946f5-b5ee-4739-a2c3-62beb5aedb57): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:26 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:26 crc kubenswrapper[5107]: E1209 14:57:26.822603 5107 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-g7sv4" podUID="357946f5-b5ee-4739-a2c3-62beb5aedb57" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.881683 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.881773 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.881791 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.881810 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.881821 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:26Z","lastTransitionTime":"2025-12-09T14:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.985017 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.985089 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.985109 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.985139 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:26 crc kubenswrapper[5107]: I1209 14:57:26.985160 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:26Z","lastTransitionTime":"2025-12-09T14:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.088167 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.088329 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.088389 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.088418 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.088439 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:27Z","lastTransitionTime":"2025-12-09T14:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.192065 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.192117 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.192129 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.192148 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.192168 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:27Z","lastTransitionTime":"2025-12-09T14:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.294675 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.294755 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.294781 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.294811 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.294828 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:27Z","lastTransitionTime":"2025-12-09T14:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.397322 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.397417 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.397436 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.397456 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.397470 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:27Z","lastTransitionTime":"2025-12-09T14:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.499802 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.499863 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.499873 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.499891 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.499902 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:27Z","lastTransitionTime":"2025-12-09T14:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.602702 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.602762 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.602775 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.602795 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.602808 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:27Z","lastTransitionTime":"2025-12-09T14:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.705376 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.705753 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.705908 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.706067 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.706192 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:27Z","lastTransitionTime":"2025-12-09T14:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.809396 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.809471 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.809489 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.809516 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.809537 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:27Z","lastTransitionTime":"2025-12-09T14:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.817651 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.818035 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.817846 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.818358 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.818387 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.818622 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.818409 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.818879 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.820105 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:27 crc kubenswrapper[5107]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 09 14:57:27 crc kubenswrapper[5107]: set -uo pipefail Dec 09 14:57:27 crc kubenswrapper[5107]: Dec 09 14:57:27 crc kubenswrapper[5107]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 09 14:57:27 crc kubenswrapper[5107]: Dec 09 14:57:27 crc kubenswrapper[5107]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 09 14:57:27 crc kubenswrapper[5107]: HOSTS_FILE="/etc/hosts" Dec 09 14:57:27 crc kubenswrapper[5107]: TEMP_FILE="/tmp/hosts.tmp" Dec 09 14:57:27 crc kubenswrapper[5107]: Dec 09 14:57:27 crc kubenswrapper[5107]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 09 14:57:27 crc kubenswrapper[5107]: Dec 09 14:57:27 crc kubenswrapper[5107]: # Make a temporary file with the old hosts file's attributes. Dec 09 14:57:27 crc kubenswrapper[5107]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 09 14:57:27 crc kubenswrapper[5107]: echo "Failed to preserve hosts file. Exiting." 
Dec 09 14:57:27 crc kubenswrapper[5107]: exit 1 Dec 09 14:57:27 crc kubenswrapper[5107]: fi Dec 09 14:57:27 crc kubenswrapper[5107]: Dec 09 14:57:27 crc kubenswrapper[5107]: while true; do Dec 09 14:57:27 crc kubenswrapper[5107]: declare -A svc_ips Dec 09 14:57:27 crc kubenswrapper[5107]: for svc in "${services[@]}"; do Dec 09 14:57:27 crc kubenswrapper[5107]: # Fetch service IP from cluster dns if present. We make several tries Dec 09 14:57:27 crc kubenswrapper[5107]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 09 14:57:27 crc kubenswrapper[5107]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 09 14:57:27 crc kubenswrapper[5107]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 09 14:57:27 crc kubenswrapper[5107]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:57:27 crc kubenswrapper[5107]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:57:27 crc kubenswrapper[5107]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:57:27 crc kubenswrapper[5107]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 09 14:57:27 crc kubenswrapper[5107]: for i in ${!cmds[*]} Dec 09 14:57:27 crc kubenswrapper[5107]: do Dec 09 14:57:27 crc kubenswrapper[5107]: ips=($(eval "${cmds[i]}")) Dec 09 14:57:27 crc kubenswrapper[5107]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 09 14:57:27 crc kubenswrapper[5107]: svc_ips["${svc}"]="${ips[@]}" Dec 09 14:57:27 crc kubenswrapper[5107]: break Dec 09 14:57:27 crc kubenswrapper[5107]: fi Dec 09 14:57:27 crc kubenswrapper[5107]: done Dec 09 14:57:27 crc kubenswrapper[5107]: done Dec 09 14:57:27 crc kubenswrapper[5107]: Dec 09 14:57:27 crc kubenswrapper[5107]: # Update /etc/hosts only if we get valid service IPs Dec 09 14:57:27 crc kubenswrapper[5107]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 09 14:57:27 crc kubenswrapper[5107]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 09 14:57:27 crc kubenswrapper[5107]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 09 14:57:27 crc kubenswrapper[5107]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 09 14:57:27 crc kubenswrapper[5107]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 09 14:57:27 crc kubenswrapper[5107]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 09 14:57:27 crc kubenswrapper[5107]: sleep 60 & wait Dec 09 14:57:27 crc kubenswrapper[5107]: continue Dec 09 14:57:27 crc kubenswrapper[5107]: fi Dec 09 14:57:27 crc kubenswrapper[5107]: Dec 09 14:57:27 crc kubenswrapper[5107]: # Append resolver entries for services Dec 09 14:57:27 crc kubenswrapper[5107]: rc=0 Dec 09 14:57:27 crc kubenswrapper[5107]: for svc in "${!svc_ips[@]}"; do Dec 09 14:57:27 crc kubenswrapper[5107]: for ip in ${svc_ips[${svc}]}; do Dec 09 14:57:27 crc kubenswrapper[5107]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 09 14:57:27 crc kubenswrapper[5107]: done Dec 09 14:57:27 crc kubenswrapper[5107]: done Dec 09 14:57:27 crc kubenswrapper[5107]: if [[ $rc -ne 0 ]]; then Dec 09 14:57:27 crc kubenswrapper[5107]: sleep 60 & wait Dec 09 14:57:27 crc kubenswrapper[5107]: continue Dec 09 14:57:27 crc kubenswrapper[5107]: fi Dec 09 14:57:27 crc kubenswrapper[5107]: Dec 09 14:57:27 crc kubenswrapper[5107]: Dec 09 14:57:27 crc kubenswrapper[5107]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 09 14:57:27 crc kubenswrapper[5107]: # Replace /etc/hosts with our modified version if needed Dec 09 14:57:27 crc kubenswrapper[5107]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 09 14:57:27 crc kubenswrapper[5107]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 09 14:57:27 crc kubenswrapper[5107]: fi Dec 09 14:57:27 crc kubenswrapper[5107]: sleep 60 & wait Dec 09 14:57:27 crc kubenswrapper[5107]: unset svc_ips Dec 09 14:57:27 crc kubenswrapper[5107]: done Dec 09 14:57:27 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcnzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-hk6gf_openshift-dns(6aac14e3-5594-400a-a5f6-f00359244626): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:27 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.821499 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-hk6gf" podUID="6aac14e3-5594-400a-a5f6-f00359244626" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.821820 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.822215 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4hsll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-s44qp_openshift-multus(468a62a3-c55d-40e0-bc1f-d01a979f017a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.822604 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8snpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-9jq8t_openshift-machine-config-operator(902902bc-6dc6-4c5f-8e1b-9399b7c813c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.823008 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.824457 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-s44qp" podUID="468a62a3-c55d-40e0-bc1f-d01a979f017a" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.826396 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8snpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-9jq8t_openshift-machine-config-operator(902902bc-6dc6-4c5f-8e1b-9399b7c813c7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:57:27 crc kubenswrapper[5107]: E1209 14:57:27.827638 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.912823 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.912880 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.912892 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.912912 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:27 crc kubenswrapper[5107]: I1209 14:57:27.912924 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:27Z","lastTransitionTime":"2025-12-09T14:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.015358 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.015423 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.015441 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.015465 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.015481 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:28Z","lastTransitionTime":"2025-12-09T14:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.117983 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.118044 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.118058 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.118082 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.118100 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:28Z","lastTransitionTime":"2025-12-09T14:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.220210 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.220266 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.220276 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.220331 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.220361 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:28Z","lastTransitionTime":"2025-12-09T14:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.323629 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.323693 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.323703 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.323722 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.323735 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:28Z","lastTransitionTime":"2025-12-09T14:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.426368 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.426432 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.426442 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.426463 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.426474 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:28Z","lastTransitionTime":"2025-12-09T14:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.529485 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.529547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.529557 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.529577 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.529588 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:28Z","lastTransitionTime":"2025-12-09T14:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.631871 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.631951 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.631972 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.632002 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.632025 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:28Z","lastTransitionTime":"2025-12-09T14:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.734699 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.734761 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.734773 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.734791 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.734805 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:28Z","lastTransitionTime":"2025-12-09T14:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:28 crc kubenswrapper[5107]: E1209 14:57:28.819426 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:28 crc kubenswrapper[5107]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 09 14:57:28 crc kubenswrapper[5107]: apiVersion: v1 Dec 09 14:57:28 crc kubenswrapper[5107]: clusters: Dec 09 14:57:28 crc kubenswrapper[5107]: - cluster: Dec 09 14:57:28 crc kubenswrapper[5107]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 09 14:57:28 crc kubenswrapper[5107]: server: https://api-int.crc.testing:6443 Dec 09 14:57:28 crc kubenswrapper[5107]: name: default-cluster Dec 09 14:57:28 crc kubenswrapper[5107]: contexts: Dec 09 14:57:28 crc kubenswrapper[5107]: - context: Dec 09 14:57:28 crc kubenswrapper[5107]: cluster: default-cluster Dec 09 14:57:28 crc kubenswrapper[5107]: namespace: default Dec 09 14:57:28 crc kubenswrapper[5107]: user: default-auth Dec 09 14:57:28 crc kubenswrapper[5107]: name: default-context Dec 09 14:57:28 crc kubenswrapper[5107]: current-context: default-context Dec 09 14:57:28 crc kubenswrapper[5107]: kind: Config Dec 09 14:57:28 crc kubenswrapper[5107]: preferences: {} Dec 09 14:57:28 crc kubenswrapper[5107]: users: Dec 09 14:57:28 crc kubenswrapper[5107]: - name: default-auth Dec 09 14:57:28 crc kubenswrapper[5107]: user: Dec 09 14:57:28 crc kubenswrapper[5107]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 09 14:57:28 crc kubenswrapper[5107]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 09 14:57:28 crc kubenswrapper[5107]: EOF Dec 09 14:57:28 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljp8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9rjcr_openshift-ovn-kubernetes(b75d4675-9c37-47cf-8fa3-11097aa379ca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:28 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:28 crc kubenswrapper[5107]: E1209 14:57:28.820799 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.836809 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 
14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.837059 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.837179 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.837263 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.837350 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:28Z","lastTransitionTime":"2025-12-09T14:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.940439 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.940531 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.940544 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.940567 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:28 crc kubenswrapper[5107]: I1209 14:57:28.940580 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:28Z","lastTransitionTime":"2025-12-09T14:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.043225 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.043574 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.043646 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.043835 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.043953 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:29Z","lastTransitionTime":"2025-12-09T14:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.146878 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.146922 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.146932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.146950 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.146959 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:29Z","lastTransitionTime":"2025-12-09T14:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.249846 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.250527 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.250594 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.250747 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.250836 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:29Z","lastTransitionTime":"2025-12-09T14:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.353515 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.353573 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.353585 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.353601 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.353612 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:29Z","lastTransitionTime":"2025-12-09T14:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.456780 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.456842 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.456859 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.456880 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.456894 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:29Z","lastTransitionTime":"2025-12-09T14:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.559728 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.559786 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.559796 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.559817 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.559829 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:29Z","lastTransitionTime":"2025-12-09T14:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.659105 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.659265 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.659291 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659433 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:57:45.659374791 +0000 UTC m=+113.383079680 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659493 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659461 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659532 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659546 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659551 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659559 5107 projected.go:194] Error 
preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.659543 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659618 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:45.659602468 +0000 UTC m=+113.383307357 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659663 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659752 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:45.65967229 +0000 UTC m=+113.383377199 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.659949 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:45.659849815 +0000 UTC m=+113.383554724 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.660132 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.660493 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.660602 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:45.660587984 +0000 UTC m=+113.384292883 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.661715 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.661753 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.661765 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.661787 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.661800 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:29Z","lastTransitionTime":"2025-12-09T14:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.761822 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.762044 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.763281 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs podName:f154303d-e14b-4854-8f94-194d0f338f98 nodeName:}" failed. No retries permitted until 2025-12-09 14:57:45.763254488 +0000 UTC m=+113.486959377 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs") pod "network-metrics-daemon-6xk48" (UID: "f154303d-e14b-4854-8f94-194d0f338f98") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.764015 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.764064 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.764073 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.764090 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.764100 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:29Z","lastTransitionTime":"2025-12-09T14:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.817373 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.817600 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.818004 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.818094 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.818154 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.818216 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.818257 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.818318 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.819316 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:29 crc kubenswrapper[5107]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:29 crc kubenswrapper[5107]: source "/env/_master" Dec 09 14:57:29 crc kubenswrapper[5107]: set +o allexport Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 09 14:57:29 crc kubenswrapper[5107]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 09 14:57:29 crc kubenswrapper[5107]: ho_enable="--enable-hybrid-overlay" Dec 09 14:57:29 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 09 14:57:29 crc kubenswrapper[5107]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 09 14:57:29 crc kubenswrapper[5107]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 09 14:57:29 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:57:29 crc kubenswrapper[5107]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 09 14:57:29 crc kubenswrapper[5107]: --webhook-host=127.0.0.1 \ Dec 09 14:57:29 crc kubenswrapper[5107]: --webhook-port=9743 \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${ho_enable} \ Dec 09 14:57:29 crc kubenswrapper[5107]: --enable-interconnect \ Dec 09 14:57:29 crc kubenswrapper[5107]: --disable-approver \ Dec 09 14:57:29 crc kubenswrapper[5107]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 09 14:57:29 crc kubenswrapper[5107]: --wait-for-kubernetes-api=200s \ Dec 09 14:57:29 crc kubenswrapper[5107]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 09 14:57:29 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Dec 09 14:57:29 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 09 14:57:29 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.819699 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:29 crc kubenswrapper[5107]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 09 14:57:29 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: source /etc/kubernetes/apiserver-url.env Dec 09 14:57:29 crc kubenswrapper[5107]: else Dec 09 14:57:29 crc kubenswrapper[5107]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 09 14:57:29 crc kubenswrapper[5107]: exit 1 Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 09 14:57:29 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:29 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.821075 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.821461 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:29 crc kubenswrapper[5107]: container 
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:29 crc kubenswrapper[5107]: source "/env/_master" Dec 09 14:57:29 crc kubenswrapper[5107]: set +o allexport Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 09 14:57:29 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:57:29 crc kubenswrapper[5107]: --disable-webhook \ Dec 09 14:57:29 crc kubenswrapper[5107]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 09 14:57:29 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Dec 09 14:57:29 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:29 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.822652 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.825784 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:29 crc kubenswrapper[5107]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 09 14:57:29 crc kubenswrapper[5107]: set -euo pipefail Dec 09 14:57:29 crc kubenswrapper[5107]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 09 14:57:29 crc kubenswrapper[5107]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 09 14:57:29 crc kubenswrapper[5107]: # As the secret mount is optional we must wait for the files to be present. Dec 09 14:57:29 crc kubenswrapper[5107]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 09 14:57:29 crc kubenswrapper[5107]: TS=$(date +%s) Dec 09 14:57:29 crc kubenswrapper[5107]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 09 14:57:29 crc kubenswrapper[5107]: HAS_LOGGED_INFO=0 Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: log_missing_certs(){ Dec 09 14:57:29 crc kubenswrapper[5107]: CUR_TS=$(date +%s) Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 09 14:57:29 crc kubenswrapper[5107]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 09 14:57:29 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 09 14:57:29 crc kubenswrapper[5107]: HAS_LOGGED_INFO=1 Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: } Dec 09 14:57:29 crc kubenswrapper[5107]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Dec 09 14:57:29 crc kubenswrapper[5107]: log_missing_certs Dec 09 14:57:29 crc kubenswrapper[5107]: sleep 5 Dec 09 14:57:29 crc kubenswrapper[5107]: done Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 09 14:57:29 crc kubenswrapper[5107]: exec /usr/bin/kube-rbac-proxy \ Dec 09 14:57:29 crc kubenswrapper[5107]: --logtostderr \ Dec 09 14:57:29 crc kubenswrapper[5107]: --secure-listen-address=:9108 \ Dec 09 14:57:29 crc kubenswrapper[5107]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 09 14:57:29 crc kubenswrapper[5107]: --upstream=http://127.0.0.1:29108/ \ Dec 09 14:57:29 crc kubenswrapper[5107]: --tls-private-key-file=${TLS_PK} \ Dec 09 14:57:29 crc kubenswrapper[5107]: --tls-cert-file=${TLS_CERT} Dec 09 14:57:29 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbbvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-6zphj_openshift-ovn-kubernetes(035458af-eba0-4241-bcac-4e11d6358b21): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:29 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.830033 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:57:29 crc kubenswrapper[5107]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: set -o allexport Dec 09 14:57:29 crc kubenswrapper[5107]: source "/env/_master" Dec 09 14:57:29 crc kubenswrapper[5107]: set +o allexport Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt= Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt= Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt= Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt= Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag= Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: # This is needed so that converting clusters from GA to 
TP Dec 09 14:57:29 crc kubenswrapper[5107]: # will rollout control plane pods as well Dec 09 14:57:29 crc kubenswrapper[5107]: network_segmentation_enabled_flag= Dec 09 14:57:29 crc kubenswrapper[5107]: multi_network_enabled_flag= Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "true" != "true" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: route_advertisements_enable_flag= Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag= Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: # Enable multi-network policy if configured (control-plane always full mode) Dec 09 14:57:29 crc kubenswrapper[5107]: multi_network_policy_enabled_flag= Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: # Enable admin network policy if configured (control-plane always full mode) Dec 09 14:57:29 crc kubenswrapper[5107]: admin_network_policy_enabled_flag= Dec 09 14:57:29 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Dec 09 14:57:29 crc kubenswrapper[5107]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: if [ "shared" == "shared" ]; then Dec 09 14:57:29 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode shared" Dec 09 14:57:29 crc kubenswrapper[5107]: elif [ "shared" == "local" ]; then Dec 09 14:57:29 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode local" Dec 09 14:57:29 crc kubenswrapper[5107]: else Dec 09 14:57:29 crc kubenswrapper[5107]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 09 14:57:29 crc kubenswrapper[5107]: exit 1 Dec 09 14:57:29 crc kubenswrapper[5107]: fi Dec 09 14:57:29 crc kubenswrapper[5107]: Dec 09 14:57:29 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 09 14:57:29 crc kubenswrapper[5107]: exec /usr/bin/ovnkube \ Dec 09 14:57:29 crc kubenswrapper[5107]: --enable-interconnect \ Dec 09 14:57:29 crc kubenswrapper[5107]: --init-cluster-manager "${K8S_NODE}" \ Dec 09 14:57:29 crc kubenswrapper[5107]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 09 14:57:29 crc kubenswrapper[5107]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 09 14:57:29 crc kubenswrapper[5107]: --metrics-bind-address "127.0.0.1:29108" \ Dec 09 14:57:29 crc kubenswrapper[5107]: --metrics-enable-pprof \ Dec 09 14:57:29 crc kubenswrapper[5107]: --metrics-enable-config-duration \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${ovn_v4_join_subnet_opt} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${ovn_v6_join_subnet_opt} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${dns_name_resolver_enabled_flag} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${persistent_ips_enabled_flag} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${multi_network_enabled_flag} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${network_segmentation_enabled_flag} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${gateway_mode_flags} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${route_advertisements_enable_flag} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${preconfigured_udn_addresses_enable_flag} \ Dec 09 14:57:29 crc kubenswrapper[5107]: --enable-egress-ip=true \ Dec 09 14:57:29 crc kubenswrapper[5107]: --enable-egress-firewall=true \ Dec 09 14:57:29 crc kubenswrapper[5107]: --enable-egress-qos=true \ Dec 09 14:57:29 crc kubenswrapper[5107]: --enable-egress-service=true \ Dec 09 14:57:29 crc kubenswrapper[5107]: --enable-multicast \ Dec 09 14:57:29 crc kubenswrapper[5107]: --enable-multi-external-gateway=true \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${multi_network_policy_enabled_flag} \ Dec 09 14:57:29 crc kubenswrapper[5107]: ${admin_network_policy_enabled_flag} Dec 09 14:57:29 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbbvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-6zphj_openshift-ovn-kubernetes(035458af-eba0-4241-bcac-4e11d6358b21): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:57:29 crc kubenswrapper[5107]: > logger="UnhandledError" Dec 09 14:57:29 crc kubenswrapper[5107]: E1209 14:57:29.831353 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" podUID="035458af-eba0-4241-bcac-4e11d6358b21" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.867861 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.867929 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.867948 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.867974 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.867993 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:29Z","lastTransitionTime":"2025-12-09T14:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.970416 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.970748 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.970812 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.970909 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:29 crc kubenswrapper[5107]: I1209 14:57:29.971077 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:29Z","lastTransitionTime":"2025-12-09T14:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.073844 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.073904 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.073916 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.073937 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.073949 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:30Z","lastTransitionTime":"2025-12-09T14:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.177252 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.177765 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.177859 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.177938 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.178005 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:30Z","lastTransitionTime":"2025-12-09T14:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.281705 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.281780 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.281796 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.281822 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.281841 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:30Z","lastTransitionTime":"2025-12-09T14:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.384142 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.384182 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.384200 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.384218 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.384229 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:30Z","lastTransitionTime":"2025-12-09T14:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.485839 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.486104 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.486175 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.486246 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.486310 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:30Z","lastTransitionTime":"2025-12-09T14:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.589565 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.589896 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.589975 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.590052 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.590125 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:30Z","lastTransitionTime":"2025-12-09T14:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.692119 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.692208 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.692223 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.692243 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.692260 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:30Z","lastTransitionTime":"2025-12-09T14:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.794507 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.794557 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.794570 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.794589 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.794601 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:30Z","lastTransitionTime":"2025-12-09T14:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.897369 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.897442 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.897462 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.897488 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:30 crc kubenswrapper[5107]: I1209 14:57:30.897507 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:30Z","lastTransitionTime":"2025-12-09T14:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.000251 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.000323 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.000423 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.000461 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.000483 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.103088 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.103513 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.103734 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.103973 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.104187 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.206825 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.207111 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.207226 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.207367 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.207446 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.309717 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.309793 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.309806 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.309826 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.309840 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.412802 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.412858 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.412884 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.412901 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.412912 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.502550 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.502609 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.502623 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.502642 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.502654 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: E1209 14:57:31.512903 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.517136 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.517401 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.517531 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.517568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.517581 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: E1209 14:57:31.528064 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.531625 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.531671 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.531685 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.531703 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.531714 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: E1209 14:57:31.541071 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.546204 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.546260 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.546273 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.546291 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.546309 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: E1209 14:57:31.557259 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.561553 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.561861 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.561952 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.562053 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.562148 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: E1209 14:57:31.575313 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:31 crc kubenswrapper[5107]: E1209 14:57:31.575877 5107 
kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.578074 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.578143 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.578158 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.578181 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.578197 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.681383 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.681766 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.681913 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.682035 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.682128 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.785414 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.785755 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.785843 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.785946 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.786037 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.817654 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:31 crc kubenswrapper[5107]: E1209 14:57:31.817829 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.817843 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.817924 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:31 crc kubenswrapper[5107]: E1209 14:57:31.818056 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:31 crc kubenswrapper[5107]: E1209 14:57:31.818198 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.818443 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:31 crc kubenswrapper[5107]: E1209 14:57:31.818676 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.888880 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.889526 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.889547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.889567 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.889579 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.992131 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.992197 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.992216 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.992240 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:31 crc kubenswrapper[5107]: I1209 14:57:31.992257 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:31Z","lastTransitionTime":"2025-12-09T14:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.095093 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.095185 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.095198 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.095437 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.095451 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:32Z","lastTransitionTime":"2025-12-09T14:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.198200 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.198255 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.198267 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.198284 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.198294 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:32Z","lastTransitionTime":"2025-12-09T14:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.300578 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.300675 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.300686 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.300704 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.300720 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:32Z","lastTransitionTime":"2025-12-09T14:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.402552 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.402656 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.402689 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.402726 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.402745 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:32Z","lastTransitionTime":"2025-12-09T14:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.505260 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.505303 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.505312 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.505327 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.505352 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:32Z","lastTransitionTime":"2025-12-09T14:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.607840 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.607894 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.607906 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.607924 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.607939 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:32Z","lastTransitionTime":"2025-12-09T14:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.710114 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.710187 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.710206 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.710231 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.710251 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:32Z","lastTransitionTime":"2025-12-09T14:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.813655 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.813742 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.813767 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.813800 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.813822 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:32Z","lastTransitionTime":"2025-12-09T14:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.831220 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.845027 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.855858 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.868128 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.884463 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.895627 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.915878 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.915930 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.915943 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.915965 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.915981 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:32Z","lastTransitionTime":"2025-12-09T14:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.916860 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.928614 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.942930 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.957965 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.974191 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:32 crc kubenswrapper[5107]: I1209 14:57:32.991485 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50482b5b-33e4-4375-b4ec-a1c0ebe2c67b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:57:03Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:57:02.715968 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:57:02.716095 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:57:02.716826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1531350067/tls.crt::/tmp/serving-cert-1531350067/tls.key\\\\\\\"\\\\nI1209 14:57:02.999617 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:57:03.002958 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:57:03.003001 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:57:03.003048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:57:03.003055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:57:03.007135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:57:03.007175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007181 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:57:03.007189 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:57:03.007194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:57:03.007197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:57:03.007189 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:57:03.010146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.005802 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.018662 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.019118 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.019176 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.019189 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.019209 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.019220 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:33Z","lastTransitionTime":"2025-12-09T14:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.029527 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.039454 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6xk48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f154303d-e14b-4854-8f94-194d0f338f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6xk48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.047737 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9jq8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.067303 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6e4831-9436-4325-a9e7-527721a1b000\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d96edfb4f9b0ef8edd081ac1fa1a187a546788c54bb2dda482d17bcc17d37a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://065d21f3d7d7ddf58cbad5139235697b04e42f29e61ced7e1efba1de3555ae53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf0a296ed891533100528ac09d9e9b54984dcbd0b82dc3b71f1fdbb6a16a436e\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ea5d580221066f1f04f75db2bc68afd26f37ae4331f25097f74f72a526bb4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459987e648ce73b2166ed2c4bda9dcb074c0f7ed1087788e224ae7445163de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040
cbf2aba08914cc67b41dd22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.079410 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059f16de-7ff9-4b6b-9730-03b18da8c48a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d7820e8e52e22f661c6cb580380ffd7f90da43e5aa390c88f1d73ea9a5afe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5da0b9bfefb1775c7984b5581acb0ff5ac968a5aa3abacc68f3a3cad90bed680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b741aedb777807194ba434397b4d2a6063c1648282792a0f733d3b340b26774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.121482 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.121534 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.121547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.121568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.121582 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:33Z","lastTransitionTime":"2025-12-09T14:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.224323 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.224409 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.224434 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.224469 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.224487 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:33Z","lastTransitionTime":"2025-12-09T14:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.328789 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.328904 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.328948 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.328984 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.329012 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:33Z","lastTransitionTime":"2025-12-09T14:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.431475 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.431532 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.431548 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.431570 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.431589 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:33Z","lastTransitionTime":"2025-12-09T14:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.534408 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.534491 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.534512 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.534542 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.534561 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:33Z","lastTransitionTime":"2025-12-09T14:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.637508 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.637576 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.637592 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.637617 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.637633 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:33Z","lastTransitionTime":"2025-12-09T14:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.740578 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.740658 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.740680 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.740707 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.740727 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:33Z","lastTransitionTime":"2025-12-09T14:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.758511 5107 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.817324 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.817455 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:33 crc kubenswrapper[5107]: E1209 14:57:33.817574 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.817455 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:33 crc kubenswrapper[5107]: E1209 14:57:33.817684 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:33 crc kubenswrapper[5107]: E1209 14:57:33.818034 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.818065 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:33 crc kubenswrapper[5107]: E1209 14:57:33.818232 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.843921 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.843980 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.843996 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.844015 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.844029 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:33Z","lastTransitionTime":"2025-12-09T14:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.946130 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.946224 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.946244 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.946272 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:33 crc kubenswrapper[5107]: I1209 14:57:33.946290 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:33Z","lastTransitionTime":"2025-12-09T14:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.048977 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.049423 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.049576 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.049718 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.049868 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:34Z","lastTransitionTime":"2025-12-09T14:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.153572 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.154007 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.154145 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.154271 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.154436 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:34Z","lastTransitionTime":"2025-12-09T14:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.257053 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.257097 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.257112 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.257129 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.257141 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:34Z","lastTransitionTime":"2025-12-09T14:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.359726 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.360125 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.360316 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.360605 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.360809 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:34Z","lastTransitionTime":"2025-12-09T14:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.463304 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.463428 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.463449 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.463479 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.463499 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:34Z","lastTransitionTime":"2025-12-09T14:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.565686 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.565767 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.565790 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.565817 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.565852 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:34Z","lastTransitionTime":"2025-12-09T14:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.669102 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.669181 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.669232 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.669270 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.669298 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:34Z","lastTransitionTime":"2025-12-09T14:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.771518 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.771556 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.771568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.771583 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.771593 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:34Z","lastTransitionTime":"2025-12-09T14:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.873681 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.873737 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.873747 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.873778 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.873791 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:34Z","lastTransitionTime":"2025-12-09T14:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.976866 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.976995 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.977016 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.977042 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:34 crc kubenswrapper[5107]: I1209 14:57:34.977061 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:34Z","lastTransitionTime":"2025-12-09T14:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.079992 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.080083 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.080103 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.080131 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.080168 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:35Z","lastTransitionTime":"2025-12-09T14:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.183368 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.183421 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.183432 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.183449 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.183460 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:35Z","lastTransitionTime":"2025-12-09T14:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.286097 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.286147 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.286167 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.286188 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.286203 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:35Z","lastTransitionTime":"2025-12-09T14:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.388743 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.388809 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.388828 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.388857 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.388880 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:35Z","lastTransitionTime":"2025-12-09T14:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.491805 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.491885 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.491910 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.491938 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.491971 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:35Z","lastTransitionTime":"2025-12-09T14:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.594084 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.594169 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.594197 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.594231 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.594256 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:35Z","lastTransitionTime":"2025-12-09T14:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.696448 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.696509 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.696523 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.696546 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.696560 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:35Z","lastTransitionTime":"2025-12-09T14:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.799048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.799089 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.799100 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.799117 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.799128 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:35Z","lastTransitionTime":"2025-12-09T14:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.817843 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.817906 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.817905 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:35 crc kubenswrapper[5107]: E1209 14:57:35.818022 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:35 crc kubenswrapper[5107]: E1209 14:57:35.818094 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.818137 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:35 crc kubenswrapper[5107]: E1209 14:57:35.818469 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:35 crc kubenswrapper[5107]: E1209 14:57:35.818477 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.901894 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.902008 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.902033 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.902594 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:35 crc kubenswrapper[5107]: I1209 14:57:35.902766 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:35Z","lastTransitionTime":"2025-12-09T14:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.005754 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.005809 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.005822 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.005842 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.005856 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:36Z","lastTransitionTime":"2025-12-09T14:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.108429 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.108481 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.108492 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.108511 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.108525 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:36Z","lastTransitionTime":"2025-12-09T14:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.211870 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.211952 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.211974 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.212009 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.212030 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:36Z","lastTransitionTime":"2025-12-09T14:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.315576 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.315654 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.315680 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.315710 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.315732 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:36Z","lastTransitionTime":"2025-12-09T14:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.418470 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.418558 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.418605 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.418641 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.418661 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:36Z","lastTransitionTime":"2025-12-09T14:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.522302 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.522401 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.522420 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.522448 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.522468 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:36Z","lastTransitionTime":"2025-12-09T14:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.625689 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.625765 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.625783 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.625807 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.625826 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:36Z","lastTransitionTime":"2025-12-09T14:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.728843 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.728985 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.729019 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.729050 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.729072 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:36Z","lastTransitionTime":"2025-12-09T14:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.830976 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.831039 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.831057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.831080 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.831099 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:36Z","lastTransitionTime":"2025-12-09T14:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.933901 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.933980 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.934006 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.934042 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:36 crc kubenswrapper[5107]: I1209 14:57:36.934066 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:36Z","lastTransitionTime":"2025-12-09T14:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.036788 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.036847 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.036859 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.036878 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.036890 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:37Z","lastTransitionTime":"2025-12-09T14:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.139773 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.139836 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.139850 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.139867 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.139880 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:37Z","lastTransitionTime":"2025-12-09T14:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.242284 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.242329 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.242367 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.242393 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.242404 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:37Z","lastTransitionTime":"2025-12-09T14:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.345418 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.345518 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.345546 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.345579 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.345603 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:37Z","lastTransitionTime":"2025-12-09T14:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.448210 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.448268 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.448298 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.448323 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.448380 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:37Z","lastTransitionTime":"2025-12-09T14:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.550589 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.550705 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.550725 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.550747 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.550762 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:37Z","lastTransitionTime":"2025-12-09T14:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.652854 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.652940 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.652955 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.652973 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.652986 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:37Z","lastTransitionTime":"2025-12-09T14:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.755048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.755098 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.755114 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.755140 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.755167 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:37Z","lastTransitionTime":"2025-12-09T14:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.817431 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.817527 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:37 crc kubenswrapper[5107]: E1209 14:57:37.817616 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:37 crc kubenswrapper[5107]: E1209 14:57:37.817750 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.817983 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.818240 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:37 crc kubenswrapper[5107]: E1209 14:57:37.818251 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:37 crc kubenswrapper[5107]: E1209 14:57:37.818383 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.857966 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.858038 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.858056 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.858081 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.858098 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:37Z","lastTransitionTime":"2025-12-09T14:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.961316 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.961438 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.961451 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.961473 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:37 crc kubenswrapper[5107]: I1209 14:57:37.961485 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:37Z","lastTransitionTime":"2025-12-09T14:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.064139 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.064186 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.064196 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.064211 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.064222 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:38Z","lastTransitionTime":"2025-12-09T14:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.167543 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.167612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.167625 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.167648 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.167667 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:38Z","lastTransitionTime":"2025-12-09T14:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.270234 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.270315 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.270363 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.270397 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.270416 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:38Z","lastTransitionTime":"2025-12-09T14:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.373070 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.373129 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.373141 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.373160 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.373171 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:38Z","lastTransitionTime":"2025-12-09T14:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.475636 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.475689 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.475701 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.475718 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.475732 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:38Z","lastTransitionTime":"2025-12-09T14:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.578659 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.578717 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.578731 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.578749 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.578762 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:38Z","lastTransitionTime":"2025-12-09T14:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.681740 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.681805 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.681819 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.681837 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.681850 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:38Z","lastTransitionTime":"2025-12-09T14:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.784387 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.784447 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.784456 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.784472 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.784485 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:38Z","lastTransitionTime":"2025-12-09T14:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.887988 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.888480 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.888505 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.888525 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.888538 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:38Z","lastTransitionTime":"2025-12-09T14:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.897532 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gfdn8" event={"ID":"6f91a655-0e59-4855-bb0c-acbc64e10ed7","Type":"ContainerStarted","Data":"4b7dc3aa22036d2ef5c1c3daf95f0cfecdde35ac670e0c6678f9d161b798ed08"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.916044 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.927727 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50482b5b-33e4-4375-b4ec-a1c0ebe2c67b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:57:03Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:57:02.715968 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:57:02.716095 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:57:02.716826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1531350067/tls.crt::/tmp/serving-cert-1531350067/tls.key\\\\\\\"\\\\nI1209 14:57:02.999617 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:57:03.002958 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:57:03.003001 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:57:03.003048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:57:03.003055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:57:03.007135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:57:03.007175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007181 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:57:03.007189 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:57:03.007194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:57:03.007197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:57:03.007189 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:57:03.010146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.938885 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.949223 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.960740 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.971256 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6xk48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f154303d-e14b-4854-8f94-194d0f338f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6xk48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.979923 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9jq8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.991528 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.991582 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.991596 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.991615 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.991631 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:38Z","lastTransitionTime":"2025-12-09T14:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:38 crc kubenswrapper[5107]: I1209 14:57:38.999627 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6e4831-9436-4325-a9e7-527721a1b000\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d96edfb4f9b0ef8edd081ac1fa1a187a546788c54bb2dda482d17bcc17d37a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://065d21f3d7d7ddf58cbad5139235697b04e42f29e61ced7e1efba1de3555ae53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf0a296ed891533100528ac09d9e9b54984dcbd0b82dc3b71f1fdbb6a16a436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ea5d580221066f1f04f75db2bc68afd26f37ae4331f25097f74f72a526bb4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459987e648ce73b2166ed2c4bda9dcb074c0f7ed1087788e224ae7445163de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.011316 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059f16de-7ff9-4b6b-9730-03b18da8c48a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d7820e8e52e22f661c6cb580380ffd7f90da43e5aa390c88f1d73ea9a5afe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5da0b9bfefb1775c7984b5581acb0ff5ac968a5aa3abacc68f3a3cad90bed680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b741aedb777807194ba434397b4d2a6063c1648282792a0f733d3b340b26774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.021410 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.030851 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b7dc3aa22036d2ef5c1c3daf95f0cfecdde35ac670e0c6678f9d161b798ed08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.039924 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.051101 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.063722 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.073490 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.089289 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.096233 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.096277 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:39 crc kubenswrapper[5107]: 
I1209 14:57:39.096289 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.096307 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.096320 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:39Z","lastTransitionTime":"2025-12-09T14:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.099811 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.110749 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.121201 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.198989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.199046 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.199059 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.199080 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.199098 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:39Z","lastTransitionTime":"2025-12-09T14:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.302363 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.302429 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.302447 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.302475 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.302493 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:39Z","lastTransitionTime":"2025-12-09T14:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.405468 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.405517 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.405531 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.405548 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.405557 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:39Z","lastTransitionTime":"2025-12-09T14:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.508143 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.508199 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.508213 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.508231 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.508242 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:39Z","lastTransitionTime":"2025-12-09T14:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.610587 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.610641 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.610651 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.610667 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.610678 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:39Z","lastTransitionTime":"2025-12-09T14:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.712589 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.712675 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.712702 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.712731 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.712752 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:39Z","lastTransitionTime":"2025-12-09T14:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.815575 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.815625 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.815637 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.815655 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.815669 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:39Z","lastTransitionTime":"2025-12-09T14:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.816894 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:39 crc kubenswrapper[5107]: E1209 14:57:39.817053 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.817157 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:39 crc kubenswrapper[5107]: E1209 14:57:39.817281 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.817348 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.817362 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:39 crc kubenswrapper[5107]: E1209 14:57:39.817402 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:39 crc kubenswrapper[5107]: E1209 14:57:39.817735 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.817959 5107 scope.go:117] "RemoveContainer" containerID="dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e" Dec 09 14:57:39 crc kubenswrapper[5107]: E1209 14:57:39.818143 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.901790 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hk6gf" event={"ID":"6aac14e3-5594-400a-a5f6-f00359244626","Type":"ContainerStarted","Data":"4dac722fc9478ded4a128d8fc0c0a9ff3a7facb12fdb3fba812c61dc0b291098"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.904394 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g7sv4" event={"ID":"357946f5-b5ee-4739-a2c3-62beb5aedb57","Type":"ContainerStarted","Data":"c064ed809838e4dedd5ad60bcbd76d399a059f3dc74e14be811ff1103fc3bf38"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.914262 5107 generic.go:358] "Generic (PLEG): container finished" podID="468a62a3-c55d-40e0-bc1f-d01a979f017a" containerID="658522ee5e1caf79deca6a3763610f62eada55573450a8cea8ee9ee2701fbfa1" exitCode=0 Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.914398 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" event={"ID":"468a62a3-c55d-40e0-bc1f-d01a979f017a","Type":"ContainerDied","Data":"658522ee5e1caf79deca6a3763610f62eada55573450a8cea8ee9ee2701fbfa1"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.918055 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.918112 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.918133 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.918155 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.918172 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:39Z","lastTransitionTime":"2025-12-09T14:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.921308 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.929808 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6xk48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f154303d-e14b-4854-8f94-194d0f338f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6xk48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:39 crc kubenswrapper[5107]: I1209 14:57:39.972646 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9jq8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.006077 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6e4831-9436-4325-a9e7-527721a1b000\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d96edfb4f9b0ef8edd081ac1fa1a187a546788c54bb2dda482d17bcc17d37a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://065d21f3d7d7ddf58cbad5139235697b04e42f29e61ced7e1efba1de3555ae53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf0a296ed891533100528ac09d9e9b54984dcbd0b82dc3b71f1fdbb6a16a436e\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ea5d580221066f1f04f75db2bc68afd26f37ae4331f25097f74f72a526bb4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459987e648ce73b2166ed2c4bda9dcb074c0f7ed1087788e224ae7445163de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040
cbf2aba08914cc67b41dd22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.021828 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059f16de-7ff9-4b6b-9730-03b18da8c48a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d7820e8e52e22f661c6cb580380ffd7f90da43e5aa390c88f1d73ea9a5afe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5da0b9bfefb1775c7984b5581acb0ff5ac968a5aa3abacc68f3a3cad90bed680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b741aedb777807194ba434397b4d2a6063c1648282792a0f733d3b340b26774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.028967 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.029011 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.029020 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.029036 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.029046 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:40Z","lastTransitionTime":"2025-12-09T14:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.031375 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4dac722fc9478ded4a128d8fc0c0a9ff3a7facb12fdb3fba812c61dc0b291098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.040097 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b7dc3aa22036d2ef5c1c3daf95f0cfecdde35ac670e0c6678f9d161b798ed08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.049787 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\"
:\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.063477 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[
0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.076306 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.086920 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.103443 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.115828 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.126372 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.131949 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.131984 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.131993 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.132008 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.132019 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:40Z","lastTransitionTime":"2025-12-09T14:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.137013 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.151860 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.164973 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50482b5b-33e4-4375-b4ec-a1c0ebe2c67b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:57:03Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:57:02.715968 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:57:02.716095 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:57:02.716826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1531350067/tls.crt::/tmp/serving-cert-1531350067/tls.key\\\\\\\"\\\\nI1209 14:57:02.999617 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:57:03.002958 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:57:03.003001 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:57:03.003048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:57:03.003055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:57:03.007135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:57:03.007175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007181 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:57:03.007189 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:57:03.007194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:57:03.007197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:57:03.007189 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:57:03.010146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.178834 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.193059 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.206588 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658522ee5e1caf79deca6a3763610f62eada55573450a8cea8ee9ee2701fbfa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658522ee5e1caf79deca6a3763610f62eada55573450a8cea8ee9ee2701fbfa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin
\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.219906 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50482b5b-33e4-4375-b4ec-a1c0ebe2c67b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:
55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:57:03Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:57:02.715968 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:57:02.716095 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:57:02.716826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1531350067/tls.crt::/tmp/serving-cert-1531350067/tls.key\\\\\\\"\\\\nI1209 14:57:02.999617 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:57:03.002958 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:57:03.003001 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:57:03.003048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:57:03.003055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:57:03.007135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:57:03.007175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007181 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:57:03.007189 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:57:03.007194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:57:03.007197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:57:03.007189 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:57:03.010146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.230528 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.233814 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.233876 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.233894 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.233916 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.233931 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:40Z","lastTransitionTime":"2025-12-09T14:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.239694 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.249538 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.258751 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6xk48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f154303d-e14b-4854-8f94-194d0f338f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6xk48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.270519 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9jq8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.291219 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6e4831-9436-4325-a9e7-527721a1b000\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d96edfb4f9b0ef8edd081ac1fa1a187a546788c54bb2dda482d17bcc17d37a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://065d21f3d7d7ddf58cbad5139235697b04e42f29e61ced7e1efba1de3555ae53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf0a296ed891533100528ac09d9e9b54984dcbd0b82dc3b71f1fdbb6a16a436e\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ea5d580221066f1f04f75db2bc68afd26f37ae4331f25097f74f72a526bb4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459987e648ce73b2166ed2c4bda9dcb074c0f7ed1087788e224ae7445163de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040
cbf2aba08914cc67b41dd22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.304240 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059f16de-7ff9-4b6b-9730-03b18da8c48a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d7820e8e52e22f661c6cb580380ffd7f90da43e5aa390c88f1d73ea9a5afe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5da0b9bfefb1775c7984b5581acb0ff5ac968a5aa3abacc68f3a3cad90bed680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b741aedb777807194ba434397b4d2a6063c1648282792a0f733d3b340b26774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.314263 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4dac722fc9478ded4a128d8fc0c0a9ff3a7facb12fdb3fba812c61dc0b291098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.322205 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b7dc3aa22036d2ef5c1c3daf95f0cfecdde35ac670e0c6678f9d161b798ed08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.331393 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.336067 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.336118 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.336131 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.336152 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.336166 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:40Z","lastTransitionTime":"2025-12-09T14:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.341443 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.355103 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.367263 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.387503 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.405154 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://c064ed809838e4dedd5ad60bcbd76d399a059f3dc74e14be811ff1103fc3bf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.420688 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.431508 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.438404 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.438484 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.438496 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.438522 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.438532 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:40Z","lastTransitionTime":"2025-12-09T14:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.540699 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.540760 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.540773 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.540794 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.540805 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:40Z","lastTransitionTime":"2025-12-09T14:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.644301 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.644408 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.644429 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.644451 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.644470 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:40Z","lastTransitionTime":"2025-12-09T14:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.747135 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.747180 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.747189 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.747204 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.747214 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:40Z","lastTransitionTime":"2025-12-09T14:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.851898 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.851964 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.851981 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.852004 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.852017 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:40Z","lastTransitionTime":"2025-12-09T14:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.922163 5107 generic.go:358] "Generic (PLEG): container finished" podID="468a62a3-c55d-40e0-bc1f-d01a979f017a" containerID="320a502903008a34e6b84bff2539e68f8120604c09c6f533ade4b099a997342c" exitCode=0 Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.922253 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" event={"ID":"468a62a3-c55d-40e0-bc1f-d01a979f017a","Type":"ContainerDied","Data":"320a502903008a34e6b84bff2539e68f8120604c09c6f533ade4b099a997342c"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.933723 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b7dc3aa22036d2ef5c1c3daf95f0cfecdde35ac670e0c6678f9d161b798ed08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.943593 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.954859 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.954909 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.954920 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.954937 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.954949 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:40Z","lastTransitionTime":"2025-12-09T14:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.956358 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.971699 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.982204 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:40 crc kubenswrapper[5107]: I1209 14:57:40.998296 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.008100 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://c064ed809838e4dedd5ad60bcbd76d399a059f3dc74e14be811ff1103fc3bf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.018070 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.027236 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.039437 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658522ee5e1caf79deca6a3763610f62eada55573450a8cea8ee9ee2701fbfa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658522ee5e1caf79deca6a3763610f62eada55573450a8cea8ee9ee2701fbfa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320a502903008a34e6b84bff2539e68f8120604c09c6f533ade4b099a997342c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320a502903008a34e6b84bff2539e68f8120604c09c6f533ade4b099a997342c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/
etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-fla
tfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.051229 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50482b5b-33e4-4375-b4ec-a1c0ebe2c67b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sh
a256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:57:03Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:57:02.715968 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:57:02.716095 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:57:02.716826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1531350067/tls.crt::/tmp/serving-cert-1531350067/tls.key\\\\\\\"\\\\nI1209 14:57:02.999617 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:57:03.002958 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:57:03.003001 1 
maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:57:03.003048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:57:03.003055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:57:03.007135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:57:03.007175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007181 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:57:03.007189 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:57:03.007194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:57:03.007197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:57:03.007189 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:57:03.010146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e81
2e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.057942 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.057989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.058000 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.058016 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.058031 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.061880 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.075555 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.089697 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.102929 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6xk48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f154303d-e14b-4854-8f94-194d0f338f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6xk48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.117957 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9jq8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.139308 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6e4831-9436-4325-a9e7-527721a1b000\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d96edfb4f9b0ef8edd081ac1fa1a187a546788c54bb2dda482d17bcc17d37a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://065d21f3d7d7ddf58cbad5139235697b04e42f29e61ced7e1efba1de3555ae53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf0a296ed891533100528ac09d9e9b54984dcbd0b82dc3b71f1fdbb6a16a436e\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ea5d580221066f1f04f75db2bc68afd26f37ae4331f25097f74f72a526bb4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459987e648ce73b2166ed2c4bda9dcb074c0f7ed1087788e224ae7445163de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040
cbf2aba08914cc67b41dd22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.153182 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059f16de-7ff9-4b6b-9730-03b18da8c48a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d7820e8e52e22f661c6cb580380ffd7f90da43e5aa390c88f1d73ea9a5afe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5da0b9bfefb1775c7984b5581acb0ff5ac968a5aa3abacc68f3a3cad90bed680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b741aedb777807194ba434397b4d2a6063c1648282792a0f733d3b340b26774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.160444 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.160503 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.160514 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.160535 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.160549 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.162677 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4dac722fc9478ded4a128d8fc0c0a9ff3a7facb12fdb3fba812c61dc0b291098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.263966 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.264061 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.264084 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.264116 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.264208 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.365773 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.365825 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.365838 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.365854 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.365865 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.467955 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.468001 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.468013 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.468030 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.468041 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.570206 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.570254 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.570267 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.570284 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.570297 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.673363 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.673422 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.673431 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.673448 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.673459 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.776034 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.776077 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.776087 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.776103 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.776113 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.817018 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:41 crc kubenswrapper[5107]: E1209 14:57:41.817153 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.817191 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.817255 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:41 crc kubenswrapper[5107]: E1209 14:57:41.817446 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:41 crc kubenswrapper[5107]: E1209 14:57:41.817567 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.818103 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:41 crc kubenswrapper[5107]: E1209 14:57:41.818191 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.832010 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.832073 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.832088 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.832108 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.832121 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: E1209 14:57:41.844007 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBy
tes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.847209 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.847274 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 
14:57:41.847289 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.847311 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.847325 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: E1209 14:57:41.859020 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.863037 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.863078 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.863087 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.863107 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.863118 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: E1209 14:57:41.874785 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.879475 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.879528 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.879541 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.879562 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.879575 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: E1209 14:57:41.891116 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.894460 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.894505 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.894519 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.894543 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.894558 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: E1209 14:57:41.906083 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b1559a0-2d18-46c4-a06d-382661d2a0c3\\\",\\\"systemUUID\\\":\\\"084757af-33e8-4017-8563-50553d5c8b31\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: E1209 14:57:41.906279 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.908118 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.908180 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.908195 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.908217 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.908234 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:41Z","lastTransitionTime":"2025-12-09T14:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.935726 5107 generic.go:358] "Generic (PLEG): container finished" podID="468a62a3-c55d-40e0-bc1f-d01a979f017a" containerID="00e56ec00df6b23a5dee59b6c8bfc122a53832bd74806bc688e8fd59fc47302e" exitCode=0 Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.935842 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" event={"ID":"468a62a3-c55d-40e0-bc1f-d01a979f017a","Type":"ContainerDied","Data":"00e56ec00df6b23a5dee59b6c8bfc122a53832bd74806bc688e8fd59fc47302e"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.946693 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4dac722fc9478ded4a128d8fc0c0a9ff3a7facb12fdb3fba812c61dc0b291098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.951722 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" event={"ID":"035458af-eba0-4241-bcac-4e11d6358b21","Type":"ContainerStarted","Data":"8eb899657bdc52cb8444c544352a0c306b439d5fe4c54705d994a6ac368a93e8"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.951780 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" event={"ID":"035458af-eba0-4241-bcac-4e11d6358b21","Type":"ContainerStarted","Data":"d4a3745133e65e2be2bd2cbac84be5bd1275f003ea6f2c4be8b1ccb707efe066"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.955741 5107 generic.go:358] "Generic (PLEG): container finished" podID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerID="84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709" exitCode=0 Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.955831 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerDied","Data":"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709"} Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.960758 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b7dc3aa22036d2ef5c1c3daf95f0cfecdde35ac670e0c6678f9d161b798ed08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.972004 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.982218 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:41 crc kubenswrapper[5107]: I1209 14:57:41.992549 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.005158 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.012172 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.012216 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.012228 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.012245 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.012261 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:42Z","lastTransitionTime":"2025-12-09T14:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.034290 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.046592 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://c064ed809838e4dedd5ad60bcbd76d399a059f3dc74e14be811ff1103fc3bf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-c
ni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.059824 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.071150 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.085623 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658522ee5e1caf79deca6a3763610f62eada55573450a8cea8ee9ee2701fbfa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658522ee5e1caf79deca6a3763610f62eada55573450a8cea8ee9ee2701fbfa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320a502903008a34e6b84bff2539e68f8120604c09c6f533ade4b099a997342c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320a502903008a34e6b84bff2539e68f8120604c09c6f533ade4b099a997342c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/
etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00e56ec00df6b23a5dee59b6c8bfc122a53832bd74806bc688e8fd59fc47302e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00e56ec00df6b23a5dee59b6c8bfc122a53832bd74806bc688e8fd59fc47302e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed2253
5093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.101695 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50482b5b-33e4-4375-b4ec-a1c0ebe2c67b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:57:03Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:57:02.715968 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:57:02.716095 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:57:02.716826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1531350067/tls.crt::/tmp/serving-cert-1531350067/tls.key\\\\\\\"\\\\nI1209 14:57:02.999617 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:57:03.002958 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:57:03.003001 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:57:03.003048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:57:03.003055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:57:03.007135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:57:03.007175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007181 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:57:03.007189 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:57:03.007194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:57:03.007197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:57:03.007189 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:57:03.010146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.116470 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.116519 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.116529 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.116553 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.116565 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:42Z","lastTransitionTime":"2025-12-09T14:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.117372 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.128746 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.149082 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.160474 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6xk48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f154303d-e14b-4854-8f94-194d0f338f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlxcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6xk48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.171493 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"902902bc-6dc6-4c5f-8e1b-9399b7c813c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9jq8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.194185 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6e4831-9436-4325-a9e7-527721a1b000\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d96edfb4f9b0ef8edd081ac1fa1a187a546788c54bb2dda482d17bcc17d37a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://065d21f3d7d7ddf58cbad5139235697b04e42f29e61ced7e1efba1de3555ae53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf0a296ed891533100528ac09d9e9b54984dcbd0b82dc3b71f1fdbb6a16a436e\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ea5d580221066f1f04f75db2bc68afd26f37ae4331f25097f74f72a526bb4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:58Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459987e648ce73b2166ed2c4bda9dcb074c0f7ed1087788e224ae7445163de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040
cbf2aba08914cc67b41dd22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31b13cf5f094ee3d16d72628b2be8256450a040cbf2aba08914cc67b41dd22fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3766645d2e3d50636dc90c9b7b1e6a272a668ddd8cd9b12ec054f6c6ac82bf52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cc408bdf5a7d3b3392428cdf3c7bcf13053a1c50c74b5f64015cbb071affce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.208476 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059f16de-7ff9-4b6b-9730-03b18da8c48a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d7820e8e52e22f661c6cb580380ffd7f90da43e5aa390c88f1d73ea9a5afe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5da0b9bfefb1775c7984b5581acb0ff5ac968a5aa3abacc68f3a3cad90bed680\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b741aedb777807194ba434397b4d2a6063c1648282792a0f733d3b340b26774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.218656 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.218701 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.218712 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.218728 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.218739 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:42Z","lastTransitionTime":"2025-12-09T14:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.224031 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hk6gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aac14e3-5594-400a-a5f6-f00359244626\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://4dac722fc9478ded4a128d8fc0c0a9ff3a7facb12fdb3fba812c61dc0b291098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gcnzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hk6gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.239803 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gfdn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f91a655-0e59-4855-bb0c-acbc64e10ed7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b7dc3aa22036d2ef5c1c3daf95f0cfecdde35ac670e0c6678f9d161b798ed08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq48q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gfdn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.250578 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17d90584-600e-4a2c-858d-ac2100e604ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://90d46289c8ca432b46932951c07a149bbafcc7176ad9849b227f5651b06a547e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://082babd1585ebe1400244484d45e6ad956bbf945dd0d12b138be216b4fba3b8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\"
:\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.262604 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6efe267-4ba8-4dba-82ff-0482a7faa4cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:56:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://904e724482192717fecf2153d5eb92a8c15cff7f211c05c086e0078ea27b2c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2ee19d9b0c3ece70ee343d7e6d4270979d47c693eb4259e6765fa9cad9e1da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[
0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9deb047f092d62f6fbf741f0935ccfd2452b426c60e8583ed86fb57f9cd9c940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acf3dcf34f39f83e79fea39ae6c0addc25edf7d233ff3106e740d1c52c1a5a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.274027 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.285703 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.303257 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b75d4675-9c37-47cf-8fa3-11097aa379ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljp8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9rjcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.315527 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7sv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"357946f5-b5ee-4739-a2c3-62beb5aedb57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://c064ed809838e4dedd5ad60bcbd76d399a059f3dc74e14be811ff1103fc3bf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-qr2g2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7sv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.320785 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.320850 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.320867 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.320888 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.320900 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:42Z","lastTransitionTime":"2025-12-09T14:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.330869 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.346106 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"035458af-eba0-4241-bcac-4e11d6358b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4a3745133e65e2be2bd2cbac84be5bd1275f003ea6f2c4be8b1ccb707efe066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eb899657bdc52cb8444c544352a0c306b439d5fe4c
54705d994a6ac368a93e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:57:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbbvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-6zphj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.365654 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468a62a3-c55d-40e0-bc1f-d01a979f017a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658522ee5e1caf79deca6a3763610f62eada55573450a8cea8ee9ee2701fbfa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://658522ee5e1caf79deca6a3763610f62eada55573450a8cea8ee9ee2701fbfa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320a502903008a34e6b84bff2539e68f8120604c09c6f533ade4b099a997342c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320a502903008a34e6b84bff2539e68f8120604c09c6f533ade4b099a997342c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/
etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00e56ec00df6b23a5dee59b6c8bfc122a53832bd74806bc688e8fd59fc47302e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00e56ec00df6b23a5dee59b6c8bfc122a53832bd74806bc688e8fd59fc47302e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed2253
5093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4hsll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:57:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s44qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.386793 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50482b5b-33e4-4375-b4ec-a1c0ebe2c67b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:57:03Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:57:02.715968 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:57:02.716095 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:57:02.716826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1531350067/tls.crt::/tmp/serving-cert-1531350067/tls.key\\\\\\\"\\\\nI1209 14:57:02.999617 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:57:03.002958 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:57:03.003001 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:57:03.003048 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:57:03.003055 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:57:03.007135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 14:57:03.007175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007181 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:57:03.007186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:57:03.007189 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:57:03.007194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:57:03.007197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 14:57:03.007189 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 14:57:03.010146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:57:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:55:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:55:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:55:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.404587 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.420030 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.424532 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.424587 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.424601 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.424819 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.424832 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:42Z","lastTransitionTime":"2025-12-09T14:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.435390 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:57:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.509118 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=29.509092622 podStartE2EDuration="29.509092622s" podCreationTimestamp="2025-12-09 14:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:42.508604058 +0000 UTC m=+110.232308957" watchObservedRunningTime="2025-12-09 14:57:42.509092622 +0000 UTC m=+110.232797511" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.526511 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.526545 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.526553 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.526568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.526577 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:42Z","lastTransitionTime":"2025-12-09T14:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.528320 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=28.52829825 podStartE2EDuration="28.52829825s" podCreationTimestamp="2025-12-09 14:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:42.527673113 +0000 UTC m=+110.251378012" watchObservedRunningTime="2025-12-09 14:57:42.52829825 +0000 UTC m=+110.252003139" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.628779 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.629078 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.629109 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.629261 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.629289 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:42Z","lastTransitionTime":"2025-12-09T14:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.732284 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.732359 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.732373 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.732390 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.732400 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:42Z","lastTransitionTime":"2025-12-09T14:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.838231 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.838774 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.838787 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.838802 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.838813 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:42Z","lastTransitionTime":"2025-12-09T14:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.899172 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hk6gf" podStartSLOduration=91.899145008 podStartE2EDuration="1m31.899145008s" podCreationTimestamp="2025-12-09 14:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:42.882301213 +0000 UTC m=+110.606006102" watchObservedRunningTime="2025-12-09 14:57:42.899145008 +0000 UTC m=+110.622849897" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.899542 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gfdn8" podStartSLOduration=90.899538539 podStartE2EDuration="1m30.899538539s" podCreationTimestamp="2025-12-09 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:42.898892151 +0000 UTC m=+110.622597040" watchObservedRunningTime="2025-12-09 14:57:42.899538539 +0000 UTC m=+110.623243428" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.909499 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=29.909474777 podStartE2EDuration="29.909474777s" podCreationTimestamp="2025-12-09 14:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:42.909094486 +0000 UTC m=+110.632799375" watchObservedRunningTime="2025-12-09 14:57:42.909474777 +0000 UTC m=+110.633179666" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.945895 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.946600 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.946615 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.946633 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.946645 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:42Z","lastTransitionTime":"2025-12-09T14:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.947358 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=29.94732061 podStartE2EDuration="29.94732061s" podCreationTimestamp="2025-12-09 14:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:42.931579965 +0000 UTC m=+110.655284864" watchObservedRunningTime="2025-12-09 14:57:42.94732061 +0000 UTC m=+110.671025499" Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.974532 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"5b814ea89f64507dc4754dd47ce98428b40218fd8b8ae7c14ed43be7f5b529ef"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.974829 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"a3b2de2366afc14b9d76efe546ae3ab87e3583345023d686d7cf69cd44aca34d"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.977869 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerStarted","Data":"c9ff12bb0bcdabee200b9ad2d18d95ed57d2bf2ef0990fb07cb22ecab1d5e617"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.977898 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerStarted","Data":"b2aa0096800e428256498a02040f82f95456606c0cce4908c7d12daee3bee806"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.981025 5107 generic.go:358] "Generic (PLEG): container finished" podID="468a62a3-c55d-40e0-bc1f-d01a979f017a" containerID="f9ace0c786751ff90f6536c6cd5a89b4220e175c8e3803df5dea430bf6341506" exitCode=0 Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.981133 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" event={"ID":"468a62a3-c55d-40e0-bc1f-d01a979f017a","Type":"ContainerDied","Data":"f9ace0c786751ff90f6536c6cd5a89b4220e175c8e3803df5dea430bf6341506"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.995715 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerStarted","Data":"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.995771 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerStarted","Data":"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.995782 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerStarted","Data":"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.995791 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerStarted","Data":"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.995800 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerStarted","Data":"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a"} Dec 09 14:57:42 crc kubenswrapper[5107]: I1209 14:57:42.995809 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerStarted","Data":"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2"} Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.015994 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g7sv4" podStartSLOduration=90.015966204 podStartE2EDuration="1m30.015966204s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:43.000490076 +0000 UTC m=+110.724194975" watchObservedRunningTime="2025-12-09 14:57:43.015966204 +0000 UTC m=+110.739671093" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.050459 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" podStartSLOduration=90.050438035 podStartE2EDuration="1m30.050438035s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:43.029795568 +0000 UTC m=+110.753500467" watchObservedRunningTime="2025-12-09 14:57:43.050438035 +0000 UTC m=+110.774142924" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.051158 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.051193 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.051205 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.051225 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.051240 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:43Z","lastTransitionTime":"2025-12-09T14:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.136939 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podStartSLOduration=90.136913511 podStartE2EDuration="1m30.136913511s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:43.136681625 +0000 UTC m=+110.860386514" watchObservedRunningTime="2025-12-09 14:57:43.136913511 +0000 UTC m=+110.860618400" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.155914 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.155988 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.156004 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.156028 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.156047 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:43Z","lastTransitionTime":"2025-12-09T14:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.259612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.259674 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.259695 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.259719 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.259735 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:43Z","lastTransitionTime":"2025-12-09T14:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.366860 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.366915 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.366930 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.366952 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.366966 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:43Z","lastTransitionTime":"2025-12-09T14:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.469049 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.469099 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.469113 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.469131 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.469143 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:43Z","lastTransitionTime":"2025-12-09T14:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.571240 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.571304 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.571315 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.571357 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.571370 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:43Z","lastTransitionTime":"2025-12-09T14:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.674401 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.674920 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.674933 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.674952 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.674964 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:43Z","lastTransitionTime":"2025-12-09T14:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.777253 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.777295 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.777304 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.777319 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.777350 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:43Z","lastTransitionTime":"2025-12-09T14:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.818034 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:43 crc kubenswrapper[5107]: E1209 14:57:43.818301 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.818504 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:43 crc kubenswrapper[5107]: E1209 14:57:43.818589 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.818620 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:43 crc kubenswrapper[5107]: E1209 14:57:43.818665 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.818690 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:43 crc kubenswrapper[5107]: E1209 14:57:43.818735 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.881153 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.881247 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.881260 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.881279 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.881294 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:43Z","lastTransitionTime":"2025-12-09T14:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.984049 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.984107 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.984119 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.984135 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:43 crc kubenswrapper[5107]: I1209 14:57:43.984146 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:43Z","lastTransitionTime":"2025-12-09T14:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.002584 5107 generic.go:358] "Generic (PLEG): container finished" podID="468a62a3-c55d-40e0-bc1f-d01a979f017a" containerID="df8232f7456b9f10116d993af81d2adf1ad5610e1248317edb5b3a2c16c21184" exitCode=0 Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.002755 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" event={"ID":"468a62a3-c55d-40e0-bc1f-d01a979f017a","Type":"ContainerDied","Data":"df8232f7456b9f10116d993af81d2adf1ad5610e1248317edb5b3a2c16c21184"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.005466 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"86094ce98a42ae0fcaf65eebc1eee4d5d4ef85dc8d1b61034672622f2eb915bc"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.087258 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.087353 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.087368 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.087387 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.087427 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:44Z","lastTransitionTime":"2025-12-09T14:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.191651 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.191712 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.191725 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.191744 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.191759 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:44Z","lastTransitionTime":"2025-12-09T14:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.294776 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.294822 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.294832 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.294848 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.294866 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:44Z","lastTransitionTime":"2025-12-09T14:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.397009 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.397048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.397056 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.397075 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.397087 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:44Z","lastTransitionTime":"2025-12-09T14:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.499132 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.499178 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.499187 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.499220 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.499230 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:44Z","lastTransitionTime":"2025-12-09T14:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.601824 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.601893 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.601904 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.601925 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.601937 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:44Z","lastTransitionTime":"2025-12-09T14:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.704902 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.704970 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.704989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.705014 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.705032 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:44Z","lastTransitionTime":"2025-12-09T14:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.808188 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.808255 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.808328 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.808387 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.808401 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:44Z","lastTransitionTime":"2025-12-09T14:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.911924 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.911981 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.911995 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.912017 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:44 crc kubenswrapper[5107]: I1209 14:57:44.912031 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:44Z","lastTransitionTime":"2025-12-09T14:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.013595 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.013643 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.013659 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.013676 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.013688 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:45Z","lastTransitionTime":"2025-12-09T14:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.016105 5107 generic.go:358] "Generic (PLEG): container finished" podID="468a62a3-c55d-40e0-bc1f-d01a979f017a" containerID="85377cdbcfb8514b95be7df07bde5a280bd41d3018d5edce129e1ed19a8f2732" exitCode=0 Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.016190 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" event={"ID":"468a62a3-c55d-40e0-bc1f-d01a979f017a","Type":"ContainerDied","Data":"85377cdbcfb8514b95be7df07bde5a280bd41d3018d5edce129e1ed19a8f2732"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.117326 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.117880 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.117904 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.117945 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.117960 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:45Z","lastTransitionTime":"2025-12-09T14:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.220950 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.221007 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.221017 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.221053 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.221071 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:45Z","lastTransitionTime":"2025-12-09T14:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.324148 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.324230 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.324255 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.324285 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.324308 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:45Z","lastTransitionTime":"2025-12-09T14:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.431169 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.431235 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.431245 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.431262 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.431274 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:45Z","lastTransitionTime":"2025-12-09T14:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.533923 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.533977 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.533991 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.534009 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.534024 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:45Z","lastTransitionTime":"2025-12-09T14:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.636320 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.636380 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.636389 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.636403 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.636414 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:45Z","lastTransitionTime":"2025-12-09T14:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.672410 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.672518 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.672549 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.672586 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.672604 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672727 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672750 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672809 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672808 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672823 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672850 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672753 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672822 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.672802897 +0000 UTC m=+145.396507786 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672877 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672896 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.672883299 +0000 UTC m=+145.396588188 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672914 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.672904309 +0000 UTC m=+145.396609198 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672930 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.67292251 +0000 UTC m=+145.396627399 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.672942 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.67293606 +0000 UTC m=+145.396640949 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.739327 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.739395 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.739404 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.739422 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.739435 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:45Z","lastTransitionTime":"2025-12-09T14:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.773642 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.773870 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.773974 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs podName:f154303d-e14b-4854-8f94-194d0f338f98 nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.77395248 +0000 UTC m=+145.497657369 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs") pod "network-metrics-daemon-6xk48" (UID: "f154303d-e14b-4854-8f94-194d0f338f98") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.816924 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.817422 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.817038 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.817509 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.817082 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.817563 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.816974 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:45 crc kubenswrapper[5107]: E1209 14:57:45.817618 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.841972 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.842031 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.842043 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.842066 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.842079 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:45Z","lastTransitionTime":"2025-12-09T14:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.944823 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.944877 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.944889 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.944908 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:45 crc kubenswrapper[5107]: I1209 14:57:45.944919 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:45Z","lastTransitionTime":"2025-12-09T14:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.026859 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s44qp" event={"ID":"468a62a3-c55d-40e0-bc1f-d01a979f017a","Type":"ContainerStarted","Data":"3e64495338023747a53dadbfa25805236c47130fff7a0ee69f61503263300df2"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.028633 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"5272d4643fb9489b1dbe7e9df4d3b9dc6b74fa700a13b1ce216e1c44d96bfdae"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.033278 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerStarted","Data":"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.048151 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.048225 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.048239 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.048268 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.048283 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:46Z","lastTransitionTime":"2025-12-09T14:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.062896 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-s44qp" podStartSLOduration=93.062871505 podStartE2EDuration="1m33.062871505s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:46.049916585 +0000 UTC m=+113.773621504" watchObservedRunningTime="2025-12-09 14:57:46.062871505 +0000 UTC m=+113.786576384" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.151520 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.151569 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.151579 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.151598 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.151608 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:46Z","lastTransitionTime":"2025-12-09T14:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.254248 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.254431 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.254465 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.254500 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.254526 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:46Z","lastTransitionTime":"2025-12-09T14:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.357321 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.357411 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.357427 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.357445 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.357458 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:46Z","lastTransitionTime":"2025-12-09T14:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.460198 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.460260 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.460280 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.460304 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.460324 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:46Z","lastTransitionTime":"2025-12-09T14:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.562810 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.562908 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.562926 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.562948 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.562965 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:46Z","lastTransitionTime":"2025-12-09T14:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.666412 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.666523 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.666544 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.666572 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.666586 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:46Z","lastTransitionTime":"2025-12-09T14:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.768786 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.768839 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.768849 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.768866 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.768876 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:46Z","lastTransitionTime":"2025-12-09T14:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.871632 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.871692 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.871703 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.871723 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.871735 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:46Z","lastTransitionTime":"2025-12-09T14:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.974204 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.974257 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.974270 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.974291 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:46 crc kubenswrapper[5107]: I1209 14:57:46.974304 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:46Z","lastTransitionTime":"2025-12-09T14:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.077429 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.077592 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.077608 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.077625 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.077635 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:47Z","lastTransitionTime":"2025-12-09T14:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.179812 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.179863 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.179874 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.179892 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.179902 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:47Z","lastTransitionTime":"2025-12-09T14:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.282808 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.282862 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.282901 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.282920 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.282930 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:47Z","lastTransitionTime":"2025-12-09T14:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.386053 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.386129 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.386144 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.386169 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.386186 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:47Z","lastTransitionTime":"2025-12-09T14:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.489073 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.489136 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.489153 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.489172 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.489185 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:47Z","lastTransitionTime":"2025-12-09T14:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.592163 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.592225 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.592235 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.592255 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.592271 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:47Z","lastTransitionTime":"2025-12-09T14:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.694902 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.694960 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.694970 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.694989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.695003 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:47Z","lastTransitionTime":"2025-12-09T14:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.797793 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.797848 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.797867 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.797891 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.797908 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:47Z","lastTransitionTime":"2025-12-09T14:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.817175 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.817208 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.817283 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:47 crc kubenswrapper[5107]: E1209 14:57:47.817446 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.817533 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:47 crc kubenswrapper[5107]: E1209 14:57:47.817610 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:47 crc kubenswrapper[5107]: E1209 14:57:47.817730 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:47 crc kubenswrapper[5107]: E1209 14:57:47.817838 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.900433 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.900488 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.900501 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.900523 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:47 crc kubenswrapper[5107]: I1209 14:57:47.900536 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:47Z","lastTransitionTime":"2025-12-09T14:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.002946 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.003540 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.003559 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.003579 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.003591 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:48Z","lastTransitionTime":"2025-12-09T14:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.048005 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerStarted","Data":"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.048635 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.048710 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.075123 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podStartSLOduration=95.075104894 podStartE2EDuration="1m35.075104894s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:48.074274231 +0000 UTC m=+115.797979140" watchObservedRunningTime="2025-12-09 14:57:48.075104894 +0000 UTC m=+115.798809783" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.079770 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.108628 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.108682 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.108697 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.108716 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.108730 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:48Z","lastTransitionTime":"2025-12-09T14:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.210596 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.210667 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.210688 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.210717 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.210736 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:48Z","lastTransitionTime":"2025-12-09T14:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.313492 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.313547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.313558 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.313577 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.313587 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:48Z","lastTransitionTime":"2025-12-09T14:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.416047 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.416094 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.416105 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.416126 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.416139 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:48Z","lastTransitionTime":"2025-12-09T14:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.518763 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.518854 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.518880 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.518909 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.518934 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:48Z","lastTransitionTime":"2025-12-09T14:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.620909 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.620972 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.620982 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.621000 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.621011 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:48Z","lastTransitionTime":"2025-12-09T14:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.723811 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.723862 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.723874 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.723893 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.723906 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:48Z","lastTransitionTime":"2025-12-09T14:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.825377 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.825438 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.825458 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.825477 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.825493 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:48Z","lastTransitionTime":"2025-12-09T14:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.927100 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.927172 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.927188 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.927205 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:48 crc kubenswrapper[5107]: I1209 14:57:48.927217 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:48Z","lastTransitionTime":"2025-12-09T14:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.029286 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.029374 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.029393 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.029411 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.029425 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:49Z","lastTransitionTime":"2025-12-09T14:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.052844 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.083926 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.132109 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.132167 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.132179 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.132199 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.132211 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:49Z","lastTransitionTime":"2025-12-09T14:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.234970 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.235027 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.235039 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.235057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.235071 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:49Z","lastTransitionTime":"2025-12-09T14:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.337791 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.337850 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.337865 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.337890 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.337904 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:49Z","lastTransitionTime":"2025-12-09T14:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.441230 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.441294 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.441304 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.441319 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.441353 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:49Z","lastTransitionTime":"2025-12-09T14:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.558431 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.558486 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.558498 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.558519 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.558535 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:49Z","lastTransitionTime":"2025-12-09T14:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.661085 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.661129 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.661138 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.661153 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.661163 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:49Z","lastTransitionTime":"2025-12-09T14:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.763163 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.763229 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.763240 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.763259 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.763628 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:49Z","lastTransitionTime":"2025-12-09T14:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.817783 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.817783 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:49 crc kubenswrapper[5107]: E1209 14:57:49.817959 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:49 crc kubenswrapper[5107]: E1209 14:57:49.817982 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.817815 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:49 crc kubenswrapper[5107]: E1209 14:57:49.818049 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.818065 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:49 crc kubenswrapper[5107]: E1209 14:57:49.818107 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.865101 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.865138 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.865148 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.865165 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.865174 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:49Z","lastTransitionTime":"2025-12-09T14:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.967223 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.967260 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.967268 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.967281 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:49 crc kubenswrapper[5107]: I1209 14:57:49.967290 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:49Z","lastTransitionTime":"2025-12-09T14:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.072649 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.072707 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.072721 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.072741 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.072755 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:50Z","lastTransitionTime":"2025-12-09T14:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.085395 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6xk48"] Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.085522 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:50 crc kubenswrapper[5107]: E1209 14:57:50.085613 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.174908 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.174951 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.174961 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.174977 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.174986 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:50Z","lastTransitionTime":"2025-12-09T14:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.277597 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.277654 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.277668 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.277693 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.277710 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:50Z","lastTransitionTime":"2025-12-09T14:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.380484 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.380547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.380560 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.380574 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.380585 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:50Z","lastTransitionTime":"2025-12-09T14:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.482938 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.482976 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.482984 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.482999 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.483011 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:50Z","lastTransitionTime":"2025-12-09T14:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.585265 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.585352 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.585378 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.585405 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.585421 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:50Z","lastTransitionTime":"2025-12-09T14:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.688004 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.688057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.688068 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.688084 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.688097 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:50Z","lastTransitionTime":"2025-12-09T14:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.790098 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.790172 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.790189 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.790211 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.790225 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:50Z","lastTransitionTime":"2025-12-09T14:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.832438 5107 scope.go:117] "RemoveContainer" containerID="dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.893104 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.893151 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.893161 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.893178 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.893189 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:50Z","lastTransitionTime":"2025-12-09T14:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.995573 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.995643 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.995658 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.995678 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:50 crc kubenswrapper[5107]: I1209 14:57:50.995690 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:50Z","lastTransitionTime":"2025-12-09T14:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.098585 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.098638 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.098648 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.098665 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.098676 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:51Z","lastTransitionTime":"2025-12-09T14:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.201554 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.201612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.201625 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.201640 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.201652 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:51Z","lastTransitionTime":"2025-12-09T14:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.304058 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.304114 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.304124 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.304143 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.304157 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:51Z","lastTransitionTime":"2025-12-09T14:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.407107 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.407172 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.407188 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.407209 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.407225 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:51Z","lastTransitionTime":"2025-12-09T14:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.509854 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.509914 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.509930 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.509952 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.509969 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:51Z","lastTransitionTime":"2025-12-09T14:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.612451 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.612498 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.612508 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.612522 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.612532 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:51Z","lastTransitionTime":"2025-12-09T14:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.714982 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.715023 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.715031 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.715048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.715058 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:51Z","lastTransitionTime":"2025-12-09T14:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.816765 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.816810 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.816822 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.816839 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.816850 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:51Z","lastTransitionTime":"2025-12-09T14:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.816871 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:51 crc kubenswrapper[5107]: E1209 14:57:51.816969 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.817007 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.817012 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:51 crc kubenswrapper[5107]: E1209 14:57:51.817065 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.817192 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:51 crc kubenswrapper[5107]: E1209 14:57:51.817248 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:51 crc kubenswrapper[5107]: E1209 14:57:51.817182 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.919509 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.919556 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.919568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.919585 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:51 crc kubenswrapper[5107]: I1209 14:57:51.919598 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:51Z","lastTransitionTime":"2025-12-09T14:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.021544 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.021599 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.021612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.021632 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.021646 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:52Z","lastTransitionTime":"2025-12-09T14:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.124475 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.124533 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.124550 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.124572 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.124587 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:52Z","lastTransitionTime":"2025-12-09T14:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.227162 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.227529 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.227581 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.227610 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.227630 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:52Z","lastTransitionTime":"2025-12-09T14:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.286802 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.286906 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.286922 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.286949 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.286969 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:57:52Z","lastTransitionTime":"2025-12-09T14:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.337687 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn"] Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.641810 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.644858 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.645106 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.645252 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.654905 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 09 14:57:52 crc kubenswrapper[5107]: E1209 14:57:52.738708 5107 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.796676 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.796827 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.796944 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.797063 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.797193 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.813432 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.828736 5107 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.898640 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.899166 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.899216 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.899282 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.899367 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.899450 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.899501 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.902255 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.902472 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.918396 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 09 14:57:52 crc kubenswrapper[5107]: E1209 14:57:52.931057 5107 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.931256 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.935196 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.935781 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b04d7b51-a75e-4a93-8fd0-ee8994f162e6-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-sx9hn\" (UID: \"b04d7b51-a75e-4a93-8fd0-ee8994f162e6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.969795 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 09 14:57:52 crc kubenswrapper[5107]: I1209 14:57:52.977903 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" Dec 09 14:57:53 crc kubenswrapper[5107]: W1209 14:57:53.010907 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb04d7b51_a75e_4a93_8fd0_ee8994f162e6.slice/crio-c30ca9eee7dfa3a5cbf103758f4caa2ebd576994301085376c0ed85244f389f4 WatchSource:0}: Error finding container c30ca9eee7dfa3a5cbf103758f4caa2ebd576994301085376c0ed85244f389f4: Status 404 returned error can't find the container with id c30ca9eee7dfa3a5cbf103758f4caa2ebd576994301085376c0ed85244f389f4 Dec 09 14:57:53 crc kubenswrapper[5107]: I1209 14:57:53.068623 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" event={"ID":"b04d7b51-a75e-4a93-8fd0-ee8994f162e6","Type":"ContainerStarted","Data":"c30ca9eee7dfa3a5cbf103758f4caa2ebd576994301085376c0ed85244f389f4"} Dec 09 14:57:53 crc kubenswrapper[5107]: I1209 14:57:53.072599 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 09 14:57:53 crc kubenswrapper[5107]: I1209 14:57:53.074983 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb"} Dec 09 14:57:53 crc kubenswrapper[5107]: I1209 14:57:53.076249 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:57:53 crc kubenswrapper[5107]: I1209 14:57:53.105866 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=39.105846536 podStartE2EDuration="39.105846536s" 
podCreationTimestamp="2025-12-09 14:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:53.105435225 +0000 UTC m=+120.829140124" watchObservedRunningTime="2025-12-09 14:57:53.105846536 +0000 UTC m=+120.829551425" Dec 09 14:57:53 crc kubenswrapper[5107]: I1209 14:57:53.816954 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:53 crc kubenswrapper[5107]: I1209 14:57:53.817038 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:53 crc kubenswrapper[5107]: I1209 14:57:53.817063 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:53 crc kubenswrapper[5107]: E1209 14:57:53.817493 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:53 crc kubenswrapper[5107]: E1209 14:57:53.817475 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:53 crc kubenswrapper[5107]: I1209 14:57:53.817096 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:53 crc kubenswrapper[5107]: E1209 14:57:53.817618 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:53 crc kubenswrapper[5107]: E1209 14:57:53.817885 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:54 crc kubenswrapper[5107]: I1209 14:57:54.081473 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" event={"ID":"b04d7b51-a75e-4a93-8fd0-ee8994f162e6","Type":"ContainerStarted","Data":"81d946a1fd5f33eb44f182ee5f95a7873c66b86b3f29e02d0ec81cddf9f24ce1"} Dec 09 14:57:54 crc kubenswrapper[5107]: I1209 14:57:54.099772 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-sx9hn" podStartSLOduration=101.099750206 podStartE2EDuration="1m41.099750206s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:57:54.098320547 +0000 UTC m=+121.822025446" watchObservedRunningTime="2025-12-09 14:57:54.099750206 +0000 UTC m=+121.823455095" Dec 09 14:57:55 crc kubenswrapper[5107]: I1209 14:57:55.817740 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:55 crc kubenswrapper[5107]: I1209 14:57:55.817799 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:55 crc kubenswrapper[5107]: I1209 14:57:55.817740 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:55 crc kubenswrapper[5107]: E1209 14:57:55.817889 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:55 crc kubenswrapper[5107]: E1209 14:57:55.817956 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:55 crc kubenswrapper[5107]: I1209 14:57:55.817984 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:55 crc kubenswrapper[5107]: E1209 14:57:55.818059 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:55 crc kubenswrapper[5107]: E1209 14:57:55.818426 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:57 crc kubenswrapper[5107]: I1209 14:57:57.817206 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:57 crc kubenswrapper[5107]: E1209 14:57:57.817402 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:57:57 crc kubenswrapper[5107]: I1209 14:57:57.817507 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:57 crc kubenswrapper[5107]: E1209 14:57:57.817734 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6xk48" podUID="f154303d-e14b-4854-8f94-194d0f338f98" Dec 09 14:57:57 crc kubenswrapper[5107]: I1209 14:57:57.817792 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:57 crc kubenswrapper[5107]: E1209 14:57:57.817869 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:57:57 crc kubenswrapper[5107]: I1209 14:57:57.818634 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:57 crc kubenswrapper[5107]: E1209 14:57:57.818908 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:57:59 crc kubenswrapper[5107]: I1209 14:57:59.817639 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:57:59 crc kubenswrapper[5107]: I1209 14:57:59.817709 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:57:59 crc kubenswrapper[5107]: I1209 14:57:59.817739 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:57:59 crc kubenswrapper[5107]: I1209 14:57:59.817670 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:57:59 crc kubenswrapper[5107]: I1209 14:57:59.821328 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 09 14:57:59 crc kubenswrapper[5107]: I1209 14:57:59.821824 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 09 14:57:59 crc kubenswrapper[5107]: I1209 14:57:59.822013 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 09 14:57:59 crc kubenswrapper[5107]: I1209 14:57:59.822119 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 09 14:57:59 crc kubenswrapper[5107]: I1209 14:57:59.822763 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 09 14:57:59 crc kubenswrapper[5107]: I1209 14:57:59.822828 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.551569 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.608522 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.638363 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-2vgtw"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.638707 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.641193 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.642131 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.643883 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.644152 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.644667 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.644820 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.645638 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.647111 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.652121 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.664484 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.664695 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.665147 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.665229 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.665256 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.665301 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.665458 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.665636 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.666011 5107 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.685911 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.696907 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-bg27m"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.697268 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.697478 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.702994 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.704019 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-bg27m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.711767 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-9927m"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.713556 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.713846 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.715822 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.716888 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.717103 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.717222 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.717291 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.717441 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.717577 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.717604 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.717640 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.717688 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718164 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718218 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718243 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-serving-cert\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718282 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77fvx\" (UniqueName: \"kubernetes.io/projected/b47c5069-df03-4bb4-9b81-2213e9d95183-kube-api-access-77fvx\") pod \"downloads-747b44746d-bg27m\" (UID: \"b47c5069-df03-4bb4-9b81-2213e9d95183\") " pod="openshift-console/downloads-747b44746d-bg27m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718312 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/31deaf77-6b16-4eb8-9d1b-6f111794da2f-etcd-serving-ca\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718354 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31deaf77-6b16-4eb8-9d1b-6f111794da2f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718381 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df781e44-2e5e-440c-bf43-119be66a55f2-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-fttvh\" (UID: \"df781e44-2e5e-440c-bf43-119be66a55f2\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718412 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31deaf77-6b16-4eb8-9d1b-6f111794da2f-audit-dir\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718454 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-serving-cert\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718487 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-tmp\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718557 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/31deaf77-6b16-4eb8-9d1b-6f111794da2f-encryption-config\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718592 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j4vc\" (UniqueName: \"kubernetes.io/projected/31deaf77-6b16-4eb8-9d1b-6f111794da2f-kube-api-access-2j4vc\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718624 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718662 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m56j\" (UniqueName: \"kubernetes.io/projected/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-kube-api-access-7m56j\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718706 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df781e44-2e5e-440c-bf43-119be66a55f2-config\") pod \"openshift-apiserver-operator-846cbfc458-fttvh\" (UID: \"df781e44-2e5e-440c-bf43-119be66a55f2\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718736 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhpm6\" (UniqueName: \"kubernetes.io/projected/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-kube-api-access-lhpm6\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718768 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7q2k\" (UniqueName: \"kubernetes.io/projected/df781e44-2e5e-440c-bf43-119be66a55f2-kube-api-access-v7q2k\") pod \"openshift-apiserver-operator-846cbfc458-fttvh\" (UID: \"df781e44-2e5e-440c-bf43-119be66a55f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718871 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-client-ca\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718925 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31deaf77-6b16-4eb8-9d1b-6f111794da2f-serving-cert\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718955 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-config\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.718996 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31deaf77-6b16-4eb8-9d1b-6f111794da2f-audit-policies\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.719024 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-config\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.719090 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/31deaf77-6b16-4eb8-9d1b-6f111794da2f-etcd-client\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.721432 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.721729 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.722447 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.732584 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-plgtd"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.733764 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.736907 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.737413 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.737766 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.738028 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.740008 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xdkgf"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.740462 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.740634 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.746184 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-cttpw"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.746284 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.746504 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.746721 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.746921 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.747131 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.747280 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.747291 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.747479 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.747513 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.747638 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.761137 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.762092 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.762616 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.762973 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.765907 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.766379 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.768511 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.771931 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.772561 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.775527 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.775783 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.792319 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.796615 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.796651 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.796697 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.806557 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.806625 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.806801 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.806883 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.806949 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.807051 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.807050 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.807284 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.807406 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 09 14:58:02 crc 
kubenswrapper[5107]: I1209 14:58:02.807498 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.807626 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.807633 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.809134 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-dmt84"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.809597 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.809615 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.814304 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.815376 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.817440 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-hv97n"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.820434 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822230 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/31deaf77-6b16-4eb8-9d1b-6f111794da2f-encryption-config\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822282 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2j4vc\" (UniqueName: \"kubernetes.io/projected/31deaf77-6b16-4eb8-9d1b-6f111794da2f-kube-api-access-2j4vc\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822315 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g77cf\" (UniqueName: \"kubernetes.io/projected/c68ff193-fd5f-4a85-8713-20ef57d86ab8-kube-api-access-g77cf\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822346 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822370 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822389 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-trusted-ca-bundle\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822408 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c68ff193-fd5f-4a85-8713-20ef57d86ab8-etcd-client\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822424 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c68ff193-fd5f-4a85-8713-20ef57d86ab8-encryption-config\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 
14:58:02.822444 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822463 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pj29\" (UniqueName: \"kubernetes.io/projected/d293f4be-8891-4515-b52d-35a61cddfc12-kube-api-access-9pj29\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822480 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-audit\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822507 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7m56j\" (UniqueName: \"kubernetes.io/projected/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-kube-api-access-7m56j\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822529 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df781e44-2e5e-440c-bf43-119be66a55f2-config\") pod \"openshift-apiserver-operator-846cbfc458-fttvh\" (UID: \"df781e44-2e5e-440c-bf43-119be66a55f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822550 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lhpm6\" (UniqueName: \"kubernetes.io/projected/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-kube-api-access-lhpm6\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822572 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v7q2k\" (UniqueName: \"kubernetes.io/projected/df781e44-2e5e-440c-bf43-119be66a55f2-kube-api-access-v7q2k\") pod \"openshift-apiserver-operator-846cbfc458-fttvh\" (UID: \"df781e44-2e5e-440c-bf43-119be66a55f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822590 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-client-ca\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822607 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-oauth-serving-cert\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822624 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9ac2110c-ba1a-407f-bcb0-032edc5584f5-machine-approver-tls\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822642 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31deaf77-6b16-4eb8-9d1b-6f111794da2f-serving-cert\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822708 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822728 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/834666aa-f503-44df-8377-77c8670167cd-audit-dir\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822746 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frxfp\" (UniqueName: \"kubernetes.io/projected/83ca1c49-458a-4ada-acdd-b7364abbf491-kube-api-access-frxfp\") pod \"openshift-config-operator-5777786469-9927m\" (UID: \"83ca1c49-458a-4ada-acdd-b7364abbf491\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822819 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-config\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822840 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31deaf77-6b16-4eb8-9d1b-6f111794da2f-audit-policies\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822866 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-config\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: 
\"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822884 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac2110c-ba1a-407f-bcb0-032edc5584f5-config\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822901 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d293f4be-8891-4515-b52d-35a61cddfc12-console-serving-cert\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822919 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c68ff193-fd5f-4a85-8713-20ef57d86ab8-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822941 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822973 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/31deaf77-6b16-4eb8-9d1b-6f111794da2f-etcd-client\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.822991 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.823014 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/83ca1c49-458a-4ada-acdd-b7364abbf491-available-featuregates\") pod \"openshift-config-operator-5777786469-9927m\" (UID: \"83ca1c49-458a-4ada-acdd-b7364abbf491\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.823036 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-config\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.823053 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.823069 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-audit-policies\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.823097 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.823119 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-service-ca\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.823956 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-config\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.824321 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2n527"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.825406 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.826310 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.826507 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.826644 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.828492 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.828745 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.828834 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.828910 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.829096 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.829180 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.829190 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df781e44-2e5e-440c-bf43-119be66a55f2-config\") pod \"openshift-apiserver-operator-846cbfc458-fttvh\" (UID: \"df781e44-2e5e-440c-bf43-119be66a55f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.830090 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31deaf77-6b16-4eb8-9d1b-6f111794da2f-audit-policies\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.830802 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-config\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831374 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831430 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-serving-cert\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831492 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-77fvx\" (UniqueName: \"kubernetes.io/projected/b47c5069-df03-4bb4-9b81-2213e9d95183-kube-api-access-77fvx\") pod \"downloads-747b44746d-bg27m\" (UID: \"b47c5069-df03-4bb4-9b81-2213e9d95183\") " pod="openshift-console/downloads-747b44746d-bg27m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831520 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83ca1c49-458a-4ada-acdd-b7364abbf491-serving-cert\") pod \"openshift-config-operator-5777786469-9927m\" (UID: \"83ca1c49-458a-4ada-acdd-b7364abbf491\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831549 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-console-config\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831571 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-image-import-ca\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831601 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/31deaf77-6b16-4eb8-9d1b-6f111794da2f-etcd-serving-ca\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831627 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31deaf77-6b16-4eb8-9d1b-6f111794da2f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831647 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d293f4be-8891-4515-b52d-35a61cddfc12-console-oauth-config\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831667 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831699 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df781e44-2e5e-440c-bf43-119be66a55f2-serving-cert\") pod 
\"openshift-apiserver-operator-846cbfc458-fttvh\" (UID: \"df781e44-2e5e-440c-bf43-119be66a55f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831778 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31deaf77-6b16-4eb8-9d1b-6f111794da2f-audit-dir\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.831870 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31deaf77-6b16-4eb8-9d1b-6f111794da2f-audit-dir\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.832693 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/31deaf77-6b16-4eb8-9d1b-6f111794da2f-etcd-serving-ca\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.833552 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.833648 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/31deaf77-6b16-4eb8-9d1b-6f111794da2f-encryption-config\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.833780 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.833835 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk7l6\" (UniqueName: \"kubernetes.io/projected/932af8ce-8f0e-42d8-8a9e-4f1464ca84aa-kube-api-access-nk7l6\") pod \"cluster-samples-operator-6b564684c8-9rw2f\" (UID: \"932af8ce-8f0e-42d8-8a9e-4f1464ca84aa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.833893 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-serving-cert\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.833916 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c68ff193-fd5f-4a85-8713-20ef57d86ab8-audit-dir\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.833941 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.833969 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8gq7\" (UniqueName: \"kubernetes.io/projected/834666aa-f503-44df-8377-77c8670167cd-kube-api-access-h8gq7\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.833994 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ac2110c-ba1a-407f-bcb0-032edc5584f5-auth-proxy-config\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.834016 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p68gp\" (UniqueName: \"kubernetes.io/projected/9ac2110c-ba1a-407f-bcb0-032edc5584f5-kube-api-access-p68gp\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.834039 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c68ff193-fd5f-4a85-8713-20ef57d86ab8-serving-cert\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.834061 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.834086 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-tmp\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 
09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.834112 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.834136 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/932af8ce-8f0e-42d8-8a9e-4f1464ca84aa-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-9rw2f\" (UID: \"932af8ce-8f0e-42d8-8a9e-4f1464ca84aa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.834163 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.834250 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.834385 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31deaf77-6b16-4eb8-9d1b-6f111794da2f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.838576 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.839571 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.839767 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.839926 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.840087 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.840092 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-tmp\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 
14:58:02.840357 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.840555 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.841378 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.842494 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df781e44-2e5e-440c-bf43-119be66a55f2-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-fttvh\" (UID: \"df781e44-2e5e-440c-bf43-119be66a55f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.842514 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.843897 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31deaf77-6b16-4eb8-9d1b-6f111794da2f-serving-cert\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.844578 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-client-ca\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.845984 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.848549 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.853117 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-serving-cert\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.853547 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-serving-cert\") pod 
\"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.853842 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/31deaf77-6b16-4eb8-9d1b-6f111794da2f-etcd-client\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.857752 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j4vc\" (UniqueName: \"kubernetes.io/projected/31deaf77-6b16-4eb8-9d1b-6f111794da2f-kube-api-access-2j4vc\") pod \"apiserver-8596bd845d-bxvnm\" (UID: \"31deaf77-6b16-4eb8-9d1b-6f111794da2f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.859029 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhpm6\" (UniqueName: \"kubernetes.io/projected/a4c09bd5-8f7e-4af1-a374-a15f7c2db60d-kube-api-access-lhpm6\") pod \"authentication-operator-7f5c659b84-fn2cf\" (UID: \"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.861049 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7q2k\" (UniqueName: \"kubernetes.io/projected/df781e44-2e5e-440c-bf43-119be66a55f2-kube-api-access-v7q2k\") pod \"openshift-apiserver-operator-846cbfc458-fttvh\" (UID: \"df781e44-2e5e-440c-bf43-119be66a55f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.863824 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77fvx\" (UniqueName: \"kubernetes.io/projected/b47c5069-df03-4bb4-9b81-2213e9d95183-kube-api-access-77fvx\") pod \"downloads-747b44746d-bg27m\" (UID: \"b47c5069-df03-4bb4-9b81-2213e9d95183\") " pod="openshift-console/downloads-747b44746d-bg27m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.864839 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m56j\" (UniqueName: \"kubernetes.io/projected/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-kube-api-access-7m56j\") pod \"controller-manager-65b6cccf98-2vgtw\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.885623 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-9kn5t"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.885713 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.886863 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.891600 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.892131 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.892500 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.892668 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.901183 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.908997 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.914310 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.921609 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.934954 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/932af8ce-8f0e-42d8-8a9e-4f1464ca84aa-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-9rw2f\" (UID: \"932af8ce-8f0e-42d8-8a9e-4f1464ca84aa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935016 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935050 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g77cf\" (UniqueName: \"kubernetes.io/projected/c68ff193-fd5f-4a85-8713-20ef57d86ab8-kube-api-access-g77cf\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935075 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935096 5107 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935120 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-trusted-ca-bundle\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935226 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c68ff193-fd5f-4a85-8713-20ef57d86ab8-etcd-client\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935740 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c68ff193-fd5f-4a85-8713-20ef57d86ab8-encryption-config\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935778 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/edf7ee28-9ed2-48ba-b01e-db21605dc6d8-tmp-dir\") pod \"dns-operator-799b87ffcd-2n527\" (UID: \"edf7ee28-9ed2-48ba-b01e-db21605dc6d8\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935834 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pj29\" (UniqueName: \"kubernetes.io/projected/d293f4be-8891-4515-b52d-35a61cddfc12-kube-api-access-9pj29\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935858 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-audit\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.935996 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-oauth-serving-cert\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.936023 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9ac2110c-ba1a-407f-bcb0-032edc5584f5-machine-approver-tls\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 
14:58:02.936068 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.936085 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/834666aa-f503-44df-8377-77c8670167cd-audit-dir\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.936104 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-frxfp\" (UniqueName: \"kubernetes.io/projected/83ca1c49-458a-4ada-acdd-b7364abbf491-kube-api-access-frxfp\") pod \"openshift-config-operator-5777786469-9927m\" (UID: \"83ca1c49-458a-4ada-acdd-b7364abbf491\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.936212 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edf7ee28-9ed2-48ba-b01e-db21605dc6d8-metrics-tls\") pod \"dns-operator-799b87ffcd-2n527\" (UID: \"edf7ee28-9ed2-48ba-b01e-db21605dc6d8\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.936238 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac2110c-ba1a-407f-bcb0-032edc5584f5-config\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.936275 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d293f4be-8891-4515-b52d-35a61cddfc12-console-serving-cert\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.936305 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c68ff193-fd5f-4a85-8713-20ef57d86ab8-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.936410 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c68ff193-fd5f-4a85-8713-20ef57d86ab8-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.936469 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/834666aa-f503-44df-8377-77c8670167cd-audit-dir\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.937211 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-oauth-serving-cert\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.937453 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac2110c-ba1a-407f-bcb0-032edc5584f5-config\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.937800 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.937857 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.937888 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/83ca1c49-458a-4ada-acdd-b7364abbf491-available-featuregates\") pod \"openshift-config-operator-5777786469-9927m\" (UID: \"83ca1c49-458a-4ada-acdd-b7364abbf491\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.937920 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-config\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.937921 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-audit\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.937947 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.937971 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-audit-policies\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.938087 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-service-ca\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.938137 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.938091 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-trusted-ca-bundle\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.938195 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vhv6\" (UniqueName: \"kubernetes.io/projected/edf7ee28-9ed2-48ba-b01e-db21605dc6d8-kube-api-access-6vhv6\") pod \"dns-operator-799b87ffcd-2n527\" (UID: \"edf7ee28-9ed2-48ba-b01e-db21605dc6d8\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.938662 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/83ca1c49-458a-4ada-acdd-b7364abbf491-available-featuregates\") pod \"openshift-config-operator-5777786469-9927m\" (UID: \"83ca1c49-458a-4ada-acdd-b7364abbf491\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.938702 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-audit-policies\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939270 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83ca1c49-458a-4ada-acdd-b7364abbf491-serving-cert\") pod \"openshift-config-operator-5777786469-9927m\" (UID: \"83ca1c49-458a-4ada-acdd-b7364abbf491\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939316 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-console-config\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939322 
5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-service-ca\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939367 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-image-import-ca\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939371 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-config\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939459 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d293f4be-8891-4515-b52d-35a61cddfc12-console-oauth-config\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939494 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939554 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939557 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939608 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939632 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nk7l6\" (UniqueName: \"kubernetes.io/projected/932af8ce-8f0e-42d8-8a9e-4f1464ca84aa-kube-api-access-nk7l6\") pod 
\"cluster-samples-operator-6b564684c8-9rw2f\" (UID: \"932af8ce-8f0e-42d8-8a9e-4f1464ca84aa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939663 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c68ff193-fd5f-4a85-8713-20ef57d86ab8-audit-dir\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939683 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939699 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8gq7\" (UniqueName: \"kubernetes.io/projected/834666aa-f503-44df-8377-77c8670167cd-kube-api-access-h8gq7\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939719 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ac2110c-ba1a-407f-bcb0-032edc5584f5-auth-proxy-config\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939737 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p68gp\" (UniqueName: \"kubernetes.io/projected/9ac2110c-ba1a-407f-bcb0-032edc5584f5-kube-api-access-p68gp\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939753 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c68ff193-fd5f-4a85-8713-20ef57d86ab8-serving-cert\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939770 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939796 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.939997 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.940177 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c68ff193-fd5f-4a85-8713-20ef57d86ab8-audit-dir\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.940265 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d293f4be-8891-4515-b52d-35a61cddfc12-console-config\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.940354 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.940875 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ac2110c-ba1a-407f-bcb0-032edc5584f5-auth-proxy-config\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.941052 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.941312 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c68ff193-fd5f-4a85-8713-20ef57d86ab8-image-import-ca\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.941343 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.941421 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.941666 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.942460 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.943181 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.943293 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c68ff193-fd5f-4a85-8713-20ef57d86ab8-encryption-config\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.943442 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c68ff193-fd5f-4a85-8713-20ef57d86ab8-etcd-client\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.943552 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d293f4be-8891-4515-b52d-35a61cddfc12-console-serving-cert\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.943849 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9ac2110c-ba1a-407f-bcb0-032edc5584f5-machine-approver-tls\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.943917 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/932af8ce-8f0e-42d8-8a9e-4f1464ca84aa-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-9rw2f\" (UID: \"932af8ce-8f0e-42d8-8a9e-4f1464ca84aa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.944453 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/d293f4be-8891-4515-b52d-35a61cddfc12-console-oauth-config\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.944965 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83ca1c49-458a-4ada-acdd-b7364abbf491-serving-cert\") pod \"openshift-config-operator-5777786469-9927m\" (UID: \"83ca1c49-458a-4ada-acdd-b7364abbf491\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.945248 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c68ff193-fd5f-4a85-8713-20ef57d86ab8-serving-cert\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.945731 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.946106 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.947724 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-vx5zg"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.947933 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.948172 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.948957 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.958884 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.960425 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.965856 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.965972 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.969514 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.969581 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.976719 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.976827 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.983755 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.985524 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g"] Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.985644 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:02 crc kubenswrapper[5107]: I1209 14:58:02.985744 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.001960 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.016675 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-7hwmg"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.016869 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.018987 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.023902 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.026143 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.037927 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-bg27m" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.040687 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/edf7ee28-9ed2-48ba-b01e-db21605dc6d8-tmp-dir\") pod \"dns-operator-799b87ffcd-2n527\" (UID: \"edf7ee28-9ed2-48ba-b01e-db21605dc6d8\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.040798 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edf7ee28-9ed2-48ba-b01e-db21605dc6d8-metrics-tls\") pod \"dns-operator-799b87ffcd-2n527\" (UID: \"edf7ee28-9ed2-48ba-b01e-db21605dc6d8\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.041213 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/edf7ee28-9ed2-48ba-b01e-db21605dc6d8-tmp-dir\") pod \"dns-operator-799b87ffcd-2n527\" (UID: \"edf7ee28-9ed2-48ba-b01e-db21605dc6d8\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.041295 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vhv6\" (UniqueName: \"kubernetes.io/projected/edf7ee28-9ed2-48ba-b01e-db21605dc6d8-kube-api-access-6vhv6\") pod \"dns-operator-799b87ffcd-2n527\" (UID: \"edf7ee28-9ed2-48ba-b01e-db21605dc6d8\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.044185 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.044194 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.044547 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.048996 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edf7ee28-9ed2-48ba-b01e-db21605dc6d8-metrics-tls\") pod \"dns-operator-799b87ffcd-2n527\" (UID: \"edf7ee28-9ed2-48ba-b01e-db21605dc6d8\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.066684 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.066725 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-2vgtw"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.066740 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.066757 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.067794 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.072402 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.073146 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.080528 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.080965 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-9927m"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.080983 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.080996 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-bg27m"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.081013 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.082862 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.085748 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-hv97n"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.085780 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-8hbcw"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.086481 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.092895 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.093237 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.093938 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.100814 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.121085 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.129498 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.132004 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.132776 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.137599 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.137684 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.138172 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.141467 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.141566 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-cttpw"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.141589 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fnsxn"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.141917 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.146647 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.146955 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.150377 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.150665 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.159943 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-plgtd"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.159998 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.160433 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.203809 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77cf\" (UniqueName: \"kubernetes.io/projected/c68ff193-fd5f-4a85-8713-20ef57d86ab8-kube-api-access-g77cf\") pod \"apiserver-9ddfb9f55-xdkgf\" (UID: \"c68ff193-fd5f-4a85-8713-20ef57d86ab8\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.229709 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pj29\" (UniqueName: \"kubernetes.io/projected/d293f4be-8891-4515-b52d-35a61cddfc12-kube-api-access-9pj29\") pod \"console-64d44f6ddf-cttpw\" (UID: \"d293f4be-8891-4515-b52d-35a61cddfc12\") " pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.233603 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-m6g6q"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.233833 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.238980 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.239200 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.241005 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-frxfp\" (UniqueName: \"kubernetes.io/projected/83ca1c49-458a-4ada-acdd-b7364abbf491-kube-api-access-frxfp\") pod \"openshift-config-operator-5777786469-9927m\" (UID: \"83ca1c49-458a-4ada-acdd-b7364abbf491\") " pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.246003 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.246032 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2n527"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.246053 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.246066 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.246080 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-9kn5t"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.246094 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g5bqc"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.247070 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.252777 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xdkgf"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.252872 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.252887 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-vx5zg"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253120 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253444 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253468 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253482 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253497 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-m6g6q"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253511 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-8hbcw"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253524 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253541 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253558 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253571 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253583 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253597 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253611 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g5bqc"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253624 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253643 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-dmt84"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253658 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.253673 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vptvb"] Dec 09 14:58:03 crc kubenswrapper[5107]: W1209 14:58:03.254408 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31deaf77_6b16_4eb8_9d1b_6f111794da2f.slice/crio-e2a71a88c2925dc426ce95cbb53b9938fba37789c2ba5e9b7ef3719b20a78f53 WatchSource:0}: Error finding container 
e2a71a88c2925dc426ce95cbb53b9938fba37789c2ba5e9b7ef3719b20a78f53: Status 404 returned error can't find the container with id e2a71a88c2925dc426ce95cbb53b9938fba37789c2ba5e9b7ef3719b20a78f53 Dec 09 14:58:03 crc kubenswrapper[5107]: W1209 14:58:03.256200 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b8f3d5d_a12e_4845_aecf_253a1fe8cd0b.slice/crio-38456325b4bc014e6be346ec3e133bf7f3fcc1545e51ca6a5d93614e323577ff WatchSource:0}: Error finding container 38456325b4bc014e6be346ec3e133bf7f3fcc1545e51ca6a5d93614e323577ff: Status 404 returned error can't find the container with id 38456325b4bc014e6be346ec3e133bf7f3fcc1545e51ca6a5d93614e323577ff Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.258697 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8gq7\" (UniqueName: \"kubernetes.io/projected/834666aa-f503-44df-8377-77c8670167cd-kube-api-access-h8gq7\") pod \"oauth-openshift-66458b6674-plgtd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.259541 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fnsxn"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.259577 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-v799x"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.260282 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.285577 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk7l6\" (UniqueName: \"kubernetes.io/projected/932af8ce-8f0e-42d8-8a9e-4f1464ca84aa-kube-api-access-nk7l6\") pod \"cluster-samples-operator-6b564684c8-9rw2f\" (UID: \"932af8ce-8f0e-42d8-8a9e-4f1464ca84aa\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.296305 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p68gp\" (UniqueName: \"kubernetes.io/projected/9ac2110c-ba1a-407f-bcb0-032edc5584f5-kube-api-access-p68gp\") pod \"machine-approver-54c688565-8gpg6\" (UID: \"9ac2110c-ba1a-407f-bcb0-032edc5584f5\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.302190 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.312464 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-v799x"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.312515 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.312537 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.312553 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2"] Dec 09 14:58:03 crc kubenswrapper[5107]: 
I1209 14:58:03.312572 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6scqm"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.312650 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-v799x" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.321716 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.322845 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-sxtbk"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.323677 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6scqm" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.328301 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6scqm"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.328431 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-2vgtw"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.328452 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.329819 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.331347 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-bg27m"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.342151 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.357282 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.360743 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.369431 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.380630 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.387429 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.401812 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.423046 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.424931 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.441541 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.461693 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.472519 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.484178 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.485883 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.489368 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.501949 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.509681 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.522936 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.541081 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.562739 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 09 14:58:03 crc kubenswrapper[5107]: W1209 14:58:03.574985 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac2110c_ba1a_407f_bcb0_032edc5584f5.slice/crio-14aa049f5ff29b680eed5a718831210996e72406decdc321579ddedaf43c2685 WatchSource:0}: Error finding container 14aa049f5ff29b680eed5a718831210996e72406decdc321579ddedaf43c2685: Status 404 returned error can't find the container with id 14aa049f5ff29b680eed5a718831210996e72406decdc321579ddedaf43c2685 Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.581650 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-9927m"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.583184 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.602171 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.612039 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.622955 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.640978 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.661982 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.681810 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.682114 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-plgtd"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.702666 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.721358 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.744083 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.761537 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.787502 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.795797 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xdkgf"] Dec 09 14:58:03 crc kubenswrapper[5107]: W1209 14:58:03.810325 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc68ff193_fd5f_4a85_8713_20ef57d86ab8.slice/crio-2f3ba21d70897d61a2d4cdf963b56bb25bfecb3b0f57a57749b20f13f81d62c4 WatchSource:0}: Error finding container 2f3ba21d70897d61a2d4cdf963b56bb25bfecb3b0f57a57749b20f13f81d62c4: Status 404 returned error can't find the container with id 2f3ba21d70897d61a2d4cdf963b56bb25bfecb3b0f57a57749b20f13f81d62c4 Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.816533 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-cttpw"] Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.823823 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.828919 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6vhv6\" (UniqueName: \"kubernetes.io/projected/edf7ee28-9ed2-48ba-b01e-db21605dc6d8-kube-api-access-6vhv6\") pod \"dns-operator-799b87ffcd-2n527\" (UID: \"edf7ee28-9ed2-48ba-b01e-db21605dc6d8\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.842712 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.844813 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.861789 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.881880 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.901020 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.921543 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.944952 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 09 14:58:03 crc kubenswrapper[5107]: I1209 14:58:03.982537 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.002495 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.022038 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.043957 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.058498 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2n527"] Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.062408 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.079465 5107 request.go:752] "Waited before sending request" delay="1.011159017s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.083684 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.087137 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.101356 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.121703 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 09 14:58:04 crc kubenswrapper[5107]: W1209 14:58:04.124816 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedf7ee28_9ed2_48ba_b01e_db21605dc6d8.slice/crio-d06a50f00031e1ac23582ff4a6bb16a52dd2bbf1ffe77a5831e6907e3975a82f WatchSource:0}: Error finding container d06a50f00031e1ac23582ff4a6bb16a52dd2bbf1ffe77a5831e6907e3975a82f: Status 404 returned error can't find the container with id d06a50f00031e1ac23582ff4a6bb16a52dd2bbf1ffe77a5831e6907e3975a82f Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.133312 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" event={"ID":"c68ff193-fd5f-4a85-8713-20ef57d86ab8","Type":"ContainerStarted","Data":"2f3ba21d70897d61a2d4cdf963b56bb25bfecb3b0f57a57749b20f13f81d62c4"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.135284 5107 generic.go:358] "Generic (PLEG): container finished" podID="83ca1c49-458a-4ada-acdd-b7364abbf491" containerID="be8b8155f777b05de45f5a267ec0e35b561200769f93bb4816a863be9a9fb33b" exitCode=0 Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.135396 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" event={"ID":"83ca1c49-458a-4ada-acdd-b7364abbf491","Type":"ContainerDied","Data":"be8b8155f777b05de45f5a267ec0e35b561200769f93bb4816a863be9a9fb33b"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.135511 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" event={"ID":"83ca1c49-458a-4ada-acdd-b7364abbf491","Type":"ContainerStarted","Data":"67f5bb732525e2d04bcd3608f7762097f1edf30566b0a063a756474d4c88f096"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.138789 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" event={"ID":"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d","Type":"ContainerStarted","Data":"d5e43bfba2c71f68fd8daccb66bdca652c9c8e2573a4d808f9929459fdfc1b55"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.138842 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" event={"ID":"a4c09bd5-8f7e-4af1-a374-a15f7c2db60d","Type":"ContainerStarted","Data":"80439dd3da86a5cda4f1456156bea52599136bbe165a6f2d925b644e736dafa0"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.141427 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.144062 5107 generic.go:358] "Generic (PLEG): container finished" podID="31deaf77-6b16-4eb8-9d1b-6f111794da2f" containerID="b23c7adc1c4e5102ce4251ba05d3cce80fe2de84306eab12c8971b671e12c4cd" exitCode=0 Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.144183 5107 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" event={"ID":"31deaf77-6b16-4eb8-9d1b-6f111794da2f","Type":"ContainerDied","Data":"b23c7adc1c4e5102ce4251ba05d3cce80fe2de84306eab12c8971b671e12c4cd"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.144212 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" event={"ID":"31deaf77-6b16-4eb8-9d1b-6f111794da2f","Type":"ContainerStarted","Data":"e2a71a88c2925dc426ce95cbb53b9938fba37789c2ba5e9b7ef3719b20a78f53"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.145662 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" event={"ID":"834666aa-f503-44df-8377-77c8670167cd","Type":"ContainerStarted","Data":"c345a3444d4deef1e90144b46d6b5c84a184f0c0be4e8d8e02f91bd7e0a0ec6d"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.147524 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" event={"ID":"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b","Type":"ContainerStarted","Data":"176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.147566 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" event={"ID":"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b","Type":"ContainerStarted","Data":"38456325b4bc014e6be346ec3e133bf7f3fcc1545e51ca6a5d93614e323577ff"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.149772 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" event={"ID":"932af8ce-8f0e-42d8-8a9e-4f1464ca84aa","Type":"ContainerStarted","Data":"4a8e93f728d0bd311a04edf687b4216c9648010745a4daff0a225e295ef3836e"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.149824 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" event={"ID":"932af8ce-8f0e-42d8-8a9e-4f1464ca84aa","Type":"ContainerStarted","Data":"586244f6a8d1189d0d8a57bf643bf8448c43009ebd93eeec156ee12b7140c1ad"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.151203 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-bg27m" event={"ID":"b47c5069-df03-4bb4-9b81-2213e9d95183","Type":"ContainerStarted","Data":"d75f9108705b0615d3acaa0c456576407f54db90edd4cc0d344dacdd836d6fa5"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.151229 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-bg27m" event={"ID":"b47c5069-df03-4bb4-9b81-2213e9d95183","Type":"ContainerStarted","Data":"1b92f2bab2baf247b65bf79f699eee848bc3fc872cb90cf91c9114661858d582"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.152787 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" event={"ID":"df781e44-2e5e-440c-bf43-119be66a55f2","Type":"ContainerStarted","Data":"6fc3d8f0fb5432bcd84455cadb44a1375381312c00d2516649cfe750e6ea7ab1"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.152813 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" 
event={"ID":"df781e44-2e5e-440c-bf43-119be66a55f2","Type":"ContainerStarted","Data":"d32c868772f213e6837a34aa9f7b6f1a6effef862284ad9f1a7b7447eab7b3f4"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.154188 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" event={"ID":"9ac2110c-ba1a-407f-bcb0-032edc5584f5","Type":"ContainerStarted","Data":"2a894b1447bbbce907201c08092a1ffaa25bf2f0b1e6352d3e4ca2b2190b007a"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.154216 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" event={"ID":"9ac2110c-ba1a-407f-bcb0-032edc5584f5","Type":"ContainerStarted","Data":"14aa049f5ff29b680eed5a718831210996e72406decdc321579ddedaf43c2685"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.160491 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-cttpw" event={"ID":"d293f4be-8891-4515-b52d-35a61cddfc12","Type":"ContainerStarted","Data":"4de70ab7d13b65c38055a7226b82f42f0cf66602409580c66c2bfa7935f63cf2"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.160518 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-cttpw" event={"ID":"d293f4be-8891-4515-b52d-35a61cddfc12","Type":"ContainerStarted","Data":"f4621901bcde94d137d3003a9fd3b20f0892ac215578b61ed7986951b3b5b9f1"} Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.162156 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.182475 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.192499 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.194683 5107 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-2vgtw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.194770 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" podUID="8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.200901 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.221864 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.245618 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.265769 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.283547 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.302805 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.322417 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.342893 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.361526 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.382442 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.401907 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.425950 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.443903 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.463395 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.482147 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.502737 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.522119 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.541469 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.564871 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 
14:58:04.605973 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.606077 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.640409 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.641148 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.662930 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.681242 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.722802 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.742145 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.761262 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.767980 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-config\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768038 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n84wh\" (UniqueName: \"kubernetes.io/projected/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-kube-api-access-n84wh\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768116 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768142 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5e817bc4-4ff6-435d-b70f-29459a1800fe-etcd-ca\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 
crc kubenswrapper[5107]: I1209 14:58:04.768180 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26c5f65a-531c-40f2-a64f-95616e7abb9a-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768206 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-registry-tls\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768390 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e817bc4-4ff6-435d-b70f-29459a1800fe-etcd-service-ca\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768417 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e817bc4-4ff6-435d-b70f-29459a1800fe-config\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768453 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-registry-certificates\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768475 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-trusted-ca\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768509 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmkd7\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-kube-api-access-fmkd7\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768530 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75qd\" (UniqueName: \"kubernetes.io/projected/26c5f65a-531c-40f2-a64f-95616e7abb9a-kube-api-access-t75qd\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc 
kubenswrapper[5107]: I1209 14:58:04.768557 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e817bc4-4ff6-435d-b70f-29459a1800fe-tmp-dir\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768591 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e817bc4-4ff6-435d-b70f-29459a1800fe-etcd-client\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768713 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c5f65a-531c-40f2-a64f-95616e7abb9a-config\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768803 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-trusted-ca\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.768917 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-serving-cert\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.769003 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e817bc4-4ff6-435d-b70f-29459a1800fe-serving-cert\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.769111 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hxm4\" (UniqueName: \"kubernetes.io/projected/5e817bc4-4ff6-435d-b70f-29459a1800fe-kube-api-access-5hxm4\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.769146 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c5f65a-531c-40f2-a64f-95616e7abb9a-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.769216 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/baa70a71-f986-4810-8d66-a6313df5d522-ca-trust-extracted\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.769241 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/baa70a71-f986-4810-8d66-a6313df5d522-installation-pull-secrets\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.769353 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-bound-sa-token\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: E1209 14:58:04.773393 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:05.273373676 +0000 UTC m=+132.997078565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.780612 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.801269 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.824278 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.846569 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.860282 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871029 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871235 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/75da01f5-f3ad-49af-a574-6581b6a58ca2-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871291 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c5f65a-531c-40f2-a64f-95616e7abb9a-config\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871355 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g8td\" (UniqueName: \"kubernetes.io/projected/561ee952-c55f-43a8-bf1a-9aa3d3f7aafa-kube-api-access-6g8td\") pod \"ingress-canary-6scqm\" (UID: \"561ee952-c55f-43a8-bf1a-9aa3d3f7aafa\") " pod="openshift-ingress-canary/ingress-canary-6scqm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871388 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-client-ca\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871409 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871425 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d59be405-9fc2-438f-aa97-c461c35a2f61-signing-key\") pod \"service-ca-74545575db-m6g6q\" (UID: \"d59be405-9fc2-438f-aa97-c461c35a2f61\") " pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871442 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wbbg\" (UniqueName: \"kubernetes.io/projected/804c8bba-5b68-4a3e-8060-e209d86f3d38-kube-api-access-7wbbg\") pod \"multus-admission-controller-69db94689b-8hbcw\" (UID: \"804c8bba-5b68-4a3e-8060-e209d86f3d38\") " pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871460 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-srv-cert\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871476 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a00d0805-a28c-4412-a7c8-bc23c90e3bff-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-rr44v\" (UID: \"a00d0805-a28c-4412-a7c8-bc23c90e3bff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871494 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0b29ecdf-6004-475e-8bcb-5fffa678a02b-ready\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871512 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/894bdfb3-b1c7-419e-8f5c-7788f22807af-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871529 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9pn9\" (UniqueName: \"kubernetes.io/projected/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-kube-api-access-b9pn9\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871547 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jglhd\" (UniqueName: \"kubernetes.io/projected/de306a6c-37e9-4adb-bd62-44825e0df8c1-kube-api-access-jglhd\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871568 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ng7r\" (UniqueName: \"kubernetes.io/projected/ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150-kube-api-access-9ng7r\") pod \"package-server-manager-77f986bd66-22x2p\" (UID: \"ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871611 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e817bc4-4ff6-435d-b70f-29459a1800fe-serving-cert\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871640 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/75da01f5-f3ad-49af-a574-6581b6a58ca2-tmpfs\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871658 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a00d0805-a28c-4412-a7c8-bc23c90e3bff-config\") pod \"kube-storage-version-migrator-operator-565b79b866-rr44v\" (UID: \"a00d0805-a28c-4412-a7c8-bc23c90e3bff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871676 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871702 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871723 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871739 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj9nj\" (UniqueName: \"kubernetes.io/projected/de384ce1-a016-4108-bb34-bf9475a09c66-kube-api-access-qj9nj\") pod \"collect-profiles-29421525-2b9xz\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871756 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzpvv\" (UniqueName: \"kubernetes.io/projected/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-kube-api-access-hzpvv\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871771 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzqwg\" (UniqueName: \"kubernetes.io/projected/75da01f5-f3ad-49af-a574-6581b6a58ca2-kube-api-access-xzqwg\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871787 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjk82\" (UniqueName: \"kubernetes.io/projected/a967d951-470b-486f-8037-73dcbdb3e171-kube-api-access-cjk82\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871806 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871835 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/baa70a71-f986-4810-8d66-a6313df5d522-ca-trust-extracted\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871856 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-csi-data-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871872 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a967d951-470b-486f-8037-73dcbdb3e171-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871889 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871907 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871928 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871947 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-config\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " 
pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871966 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f78zz\" (UniqueName: \"kubernetes.io/projected/8b5927af-067e-4ec9-9774-89f6357ce9f1-kube-api-access-f78zz\") pod \"migrator-866fcbc849-xbd5v\" (UID: \"8b5927af-067e-4ec9-9774-89f6357ce9f1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.871987 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ee76cc25-84c6-48a3-874b-4d310ae15a1e-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872004 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/894bdfb3-b1c7-419e-8f5c-7788f22807af-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872025 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5e817bc4-4ff6-435d-b70f-29459a1800fe-etcd-ca\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872040 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872055 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee76cc25-84c6-48a3-874b-4d310ae15a1e-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872069 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-tmpfs\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872101 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26c5f65a-531c-40f2-a64f-95616e7abb9a-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872120 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-registry-tls\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872137 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee76cc25-84c6-48a3-874b-4d310ae15a1e-config\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872154 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4ca34d01-f0a4-4610-a3d9-5e26da82c790-certs\") pod \"machine-config-server-sxtbk\" (UID: \"4ca34d01-f0a4-4610-a3d9-5e26da82c790\") " pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872171 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4ca34d01-f0a4-4610-a3d9-5e26da82c790-node-bootstrap-token\") pod \"machine-config-server-sxtbk\" (UID: \"4ca34d01-f0a4-4610-a3d9-5e26da82c790\") " pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872195 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d8034f-233a-444e-aeba-825cdedbff57-kube-api-access\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872217 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-registration-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872232 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872248 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnhhp\" (UniqueName: \"kubernetes.io/projected/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-kube-api-access-hnhhp\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872266 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872282 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-22x2p\" (UID: \"ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872301 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-metrics-certs\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872324 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872365 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsp5f\" (UniqueName: \"kubernetes.io/projected/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-kube-api-access-bsp5f\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872387 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b29ecdf-6004-475e-8bcb-5fffa678a02b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872414 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-registry-certificates\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872432 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2q6v\" (UniqueName: 
\"kubernetes.io/projected/d7249681-3c68-4e92-aa49-47edb51bfb04-kube-api-access-f2q6v\") pod \"machine-config-controller-f9cdd68f7-ws4g6\" (UID: \"d7249681-3c68-4e92-aa49-47edb51bfb04\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872447 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-profile-collector-cert\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872465 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t75qd\" (UniqueName: \"kubernetes.io/projected/26c5f65a-531c-40f2-a64f-95616e7abb9a-kube-api-access-t75qd\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872483 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-plugins-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872509 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e817bc4-4ff6-435d-b70f-29459a1800fe-tmp-dir\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872531 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbrq\" (UniqueName: \"kubernetes.io/projected/fa5aa762-dad4-4341-abc7-294aaa80993e-kube-api-access-2hbrq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9jpg8\" (UID: \"fa5aa762-dad4-4341-abc7-294aaa80993e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872551 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhh5j\" (UniqueName: \"kubernetes.io/projected/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-kube-api-access-zhh5j\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872584 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de306a6c-37e9-4adb-bd62-44825e0df8c1-apiservice-cert\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872602 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4d8034f-233a-444e-aeba-825cdedbff57-tmp-dir\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872618 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqq6\" (UniqueName: \"kubernetes.io/projected/0b29ecdf-6004-475e-8bcb-5fffa678a02b-kube-api-access-zkqq6\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872640 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee76cc25-84c6-48a3-874b-4d310ae15a1e-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872655 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-stats-auth\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872678 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/804c8bba-5b68-4a3e-8060-e209d86f3d38-webhook-certs\") pod \"multus-admission-controller-69db94689b-8hbcw\" (UID: \"804c8bba-5b68-4a3e-8060-e209d86f3d38\") " pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872710 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbqt4\" (UniqueName: \"kubernetes.io/projected/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-kube-api-access-gbqt4\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872730 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-trusted-ca\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872749 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de306a6c-37e9-4adb-bd62-44825e0df8c1-webhook-cert\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872773 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-images\") pod 
\"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872793 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-serving-cert\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872808 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d7249681-3c68-4e92-aa49-47edb51bfb04-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-ws4g6\" (UID: \"d7249681-3c68-4e92-aa49-47edb51bfb04\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872825 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5135d4a-e7f3-4ba7-9758-ec80e3beb22e-config\") pod \"service-ca-operator-5b9c976747-2kdw2\" (UID: \"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872842 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/de306a6c-37e9-4adb-bd62-44825e0df8c1-tmpfs\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872864 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-config-volume\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872901 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c5f65a-531c-40f2-a64f-95616e7abb9a-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872924 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n84wh\" (UniqueName: \"kubernetes.io/projected/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-kube-api-access-n84wh\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872941 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlr9\" (UniqueName: \"kubernetes.io/projected/4ca34d01-f0a4-4610-a3d9-5e26da82c790-kube-api-access-knlr9\") pod \"machine-config-server-sxtbk\" (UID: \"4ca34d01-f0a4-4610-a3d9-5e26da82c790\") " 
pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872956 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-service-ca-bundle\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.872975 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa5aa762-dad4-4341-abc7-294aaa80993e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9jpg8\" (UID: \"fa5aa762-dad4-4341-abc7-294aaa80993e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873000 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5hxm4\" (UniqueName: \"kubernetes.io/projected/5e817bc4-4ff6-435d-b70f-29459a1800fe-kube-api-access-5hxm4\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873017 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-default-certificate\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873034 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b29ecdf-6004-475e-8bcb-5fffa678a02b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873061 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/baa70a71-f986-4810-8d66-a6313df5d522-installation-pull-secrets\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873078 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/894bdfb3-b1c7-419e-8f5c-7788f22807af-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873107 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-bound-sa-token\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 
09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873124 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4d8034f-233a-444e-aeba-825cdedbff57-config\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873140 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/561ee952-c55f-43a8-bf1a-9aa3d3f7aafa-cert\") pod \"ingress-canary-6scqm\" (UID: \"561ee952-c55f-43a8-bf1a-9aa3d3f7aafa\") " pod="openshift-ingress-canary/ingress-canary-6scqm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873162 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de384ce1-a016-4108-bb34-bf9475a09c66-config-volume\") pod \"collect-profiles-29421525-2b9xz\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873177 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d59be405-9fc2-438f-aa97-c461c35a2f61-signing-cabundle\") pod \"service-ca-74545575db-m6g6q\" (UID: \"d59be405-9fc2-438f-aa97-c461c35a2f61\") " pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873201 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-config\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873218 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-socket-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873235 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzgp\" (UniqueName: \"kubernetes.io/projected/f5135d4a-e7f3-4ba7-9758-ec80e3beb22e-kube-api-access-2dzgp\") pod \"service-ca-operator-5b9c976747-2kdw2\" (UID: \"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873257 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a967d951-470b-486f-8037-73dcbdb3e171-images\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873275 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sgtc\" (UniqueName: \"kubernetes.io/projected/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-kube-api-access-6sgtc\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873297 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de384ce1-a016-4108-bb34-bf9475a09c66-secret-volume\") pod \"collect-profiles-29421525-2b9xz\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873315 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrwsh\" (UniqueName: \"kubernetes.io/projected/d59be405-9fc2-438f-aa97-c461c35a2f61-kube-api-access-qrwsh\") pod \"service-ca-74545575db-m6g6q\" (UID: \"d59be405-9fc2-438f-aa97-c461c35a2f61\") " pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873352 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a967d951-470b-486f-8037-73dcbdb3e171-config\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873390 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e817bc4-4ff6-435d-b70f-29459a1800fe-etcd-service-ca\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873408 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-tmp\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873424 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-metrics-tls\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873439 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d7249681-3c68-4e92-aa49-47edb51bfb04-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-ws4g6\" (UID: \"d7249681-3c68-4e92-aa49-47edb51bfb04\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873459 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/75da01f5-f3ad-49af-a574-6581b6a58ca2-srv-cert\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873477 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e817bc4-4ff6-435d-b70f-29459a1800fe-config\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873495 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5135d4a-e7f3-4ba7-9758-ec80e3beb22e-serving-cert\") pod \"service-ca-operator-5b9c976747-2kdw2\" (UID: \"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873518 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-trusted-ca\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873534 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-mountpoint-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873555 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fmkd7\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-kube-api-access-fmkd7\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873573 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-tmp-dir\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873589 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlkv\" (UniqueName: \"kubernetes.io/projected/a00d0805-a28c-4412-a7c8-bc23c90e3bff-kube-api-access-9zlkv\") pod \"kube-storage-version-migrator-operator-565b79b866-rr44v\" (UID: \"a00d0805-a28c-4412-a7c8-bc23c90e3bff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873605 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-tmp\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873633 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-config\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873650 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7vck\" (UniqueName: \"kubernetes.io/projected/894bdfb3-b1c7-419e-8f5c-7788f22807af-kube-api-access-j7vck\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873666 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j9cc\" (UniqueName: \"kubernetes.io/projected/947d55c1-7cdf-48de-b10a-e783956ebbd8-kube-api-access-7j9cc\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873684 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e817bc4-4ff6-435d-b70f-29459a1800fe-etcd-client\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873701 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-serving-cert\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.873717 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4d8034f-233a-444e-aeba-825cdedbff57-serving-cert\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:04 crc kubenswrapper[5107]: E1209 14:58:04.873842 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:05.37382464 +0000 UTC m=+133.097529529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.874473 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c5f65a-531c-40f2-a64f-95616e7abb9a-config\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.876750 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-trusted-ca\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.876954 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e817bc4-4ff6-435d-b70f-29459a1800fe-config\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.877011 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/baa70a71-f986-4810-8d66-a6313df5d522-ca-trust-extracted\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.877516 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-trusted-ca\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.877831 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-config\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.878207 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26c5f65a-531c-40f2-a64f-95616e7abb9a-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.879730 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e817bc4-4ff6-435d-b70f-29459a1800fe-tmp-dir\") pod 
\"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.880557 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e817bc4-4ff6-435d-b70f-29459a1800fe-etcd-service-ca\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.884625 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.885879 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c5f65a-531c-40f2-a64f-95616e7abb9a-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.886094 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e817bc4-4ff6-435d-b70f-29459a1800fe-etcd-client\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.886239 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-registry-certificates\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.887731 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/baa70a71-f986-4810-8d66-a6313df5d522-installation-pull-secrets\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.888765 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-registry-tls\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.895512 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e817bc4-4ff6-435d-b70f-29459a1800fe-serving-cert\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.900812 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.902455 5107 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5e817bc4-4ff6-435d-b70f-29459a1800fe-etcd-ca\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.903250 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-serving-cert\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.922218 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.944393 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.961615 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.981871 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.983610 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.983772 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.983924 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.984030 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f78zz\" (UniqueName: \"kubernetes.io/projected/8b5927af-067e-4ec9-9774-89f6357ce9f1-kube-api-access-f78zz\") pod \"migrator-866fcbc849-xbd5v\" (UID: \"8b5927af-067e-4ec9-9774-89f6357ce9f1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.984131 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ee76cc25-84c6-48a3-874b-4d310ae15a1e-tmp-dir\") pod 
\"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.984234 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/894bdfb3-b1c7-419e-8f5c-7788f22807af-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.984357 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.984488 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee76cc25-84c6-48a3-874b-4d310ae15a1e-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.984598 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-tmpfs\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.984697 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee76cc25-84c6-48a3-874b-4d310ae15a1e-config\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.984792 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4ca34d01-f0a4-4610-a3d9-5e26da82c790-certs\") pod \"machine-config-server-sxtbk\" (UID: \"4ca34d01-f0a4-4610-a3d9-5e26da82c790\") " pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.984889 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4ca34d01-f0a4-4610-a3d9-5e26da82c790-node-bootstrap-token\") pod \"machine-config-server-sxtbk\" (UID: \"4ca34d01-f0a4-4610-a3d9-5e26da82c790\") " pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.984998 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d8034f-233a-444e-aeba-825cdedbff57-kube-api-access\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.985127 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-registration-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.985238 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.985393 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hnhhp\" (UniqueName: \"kubernetes.io/projected/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-kube-api-access-hnhhp\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.985544 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.985677 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-22x2p\" (UID: \"ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.985800 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-metrics-certs\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.985901 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.985923 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.986131 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bsp5f\" (UniqueName: \"kubernetes.io/projected/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-kube-api-access-bsp5f\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.986238 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ee76cc25-84c6-48a3-874b-4d310ae15a1e-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.986366 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b29ecdf-6004-475e-8bcb-5fffa678a02b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.986473 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f2q6v\" (UniqueName: \"kubernetes.io/projected/d7249681-3c68-4e92-aa49-47edb51bfb04-kube-api-access-f2q6v\") pod \"machine-config-controller-f9cdd68f7-ws4g6\" (UID: \"d7249681-3c68-4e92-aa49-47edb51bfb04\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.986550 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-profile-collector-cert\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.987936 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-plugins-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.988026 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2hbrq\" (UniqueName: \"kubernetes.io/projected/fa5aa762-dad4-4341-abc7-294aaa80993e-kube-api-access-2hbrq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9jpg8\" (UID: \"fa5aa762-dad4-4341-abc7-294aaa80993e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.988110 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhh5j\" (UniqueName: \"kubernetes.io/projected/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-kube-api-access-zhh5j\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.988185 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de306a6c-37e9-4adb-bd62-44825e0df8c1-apiservice-cert\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.988254 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4d8034f-233a-444e-aeba-825cdedbff57-tmp-dir\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.988306 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b29ecdf-6004-475e-8bcb-5fffa678a02b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.988332 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqq6\" (UniqueName: \"kubernetes.io/projected/0b29ecdf-6004-475e-8bcb-5fffa678a02b-kube-api-access-zkqq6\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.988771 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee76cc25-84c6-48a3-874b-4d310ae15a1e-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.988937 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-stats-auth\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.989064 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/804c8bba-5b68-4a3e-8060-e209d86f3d38-webhook-certs\") pod \"multus-admission-controller-69db94689b-8hbcw\" (UID: \"804c8bba-5b68-4a3e-8060-e209d86f3d38\") " pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.989216 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbqt4\" (UniqueName: \"kubernetes.io/projected/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-kube-api-access-gbqt4\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.989350 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de306a6c-37e9-4adb-bd62-44825e0df8c1-webhook-cert\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.990842 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-images\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.991252 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d7249681-3c68-4e92-aa49-47edb51bfb04-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-ws4g6\" (UID: \"d7249681-3c68-4e92-aa49-47edb51bfb04\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.991422 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5135d4a-e7f3-4ba7-9758-ec80e3beb22e-config\") pod \"service-ca-operator-5b9c976747-2kdw2\" (UID: \"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.991525 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/de306a6c-37e9-4adb-bd62-44825e0df8c1-tmpfs\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.991651 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-config-volume\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.991790 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-knlr9\" (UniqueName: \"kubernetes.io/projected/4ca34d01-f0a4-4610-a3d9-5e26da82c790-kube-api-access-knlr9\") pod \"machine-config-server-sxtbk\" (UID: \"4ca34d01-f0a4-4610-a3d9-5e26da82c790\") " pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.991882 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-service-ca-bundle\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.991982 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa5aa762-dad4-4341-abc7-294aaa80993e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9jpg8\" (UID: \"fa5aa762-dad4-4341-abc7-294aaa80993e\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992088 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-default-certificate\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992191 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b29ecdf-6004-475e-8bcb-5fffa678a02b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992275 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/894bdfb3-b1c7-419e-8f5c-7788f22807af-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992580 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4d8034f-233a-444e-aeba-825cdedbff57-config\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992665 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5135d4a-e7f3-4ba7-9758-ec80e3beb22e-config\") pod \"service-ca-operator-5b9c976747-2kdw2\" (UID: \"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992667 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/561ee952-c55f-43a8-bf1a-9aa3d3f7aafa-cert\") pod \"ingress-canary-6scqm\" (UID: \"561ee952-c55f-43a8-bf1a-9aa3d3f7aafa\") " pod="openshift-ingress-canary/ingress-canary-6scqm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992753 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de384ce1-a016-4108-bb34-bf9475a09c66-config-volume\") pod \"collect-profiles-29421525-2b9xz\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992798 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d59be405-9fc2-438f-aa97-c461c35a2f61-signing-cabundle\") pod \"service-ca-74545575db-m6g6q\" (UID: \"d59be405-9fc2-438f-aa97-c461c35a2f61\") " pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992839 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992860 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-config\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992877 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-socket-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992896 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzgp\" (UniqueName: \"kubernetes.io/projected/f5135d4a-e7f3-4ba7-9758-ec80e3beb22e-kube-api-access-2dzgp\") pod \"service-ca-operator-5b9c976747-2kdw2\" (UID: \"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992921 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a967d951-470b-486f-8037-73dcbdb3e171-images\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992939 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6sgtc\" (UniqueName: \"kubernetes.io/projected/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-kube-api-access-6sgtc\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992965 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de384ce1-a016-4108-bb34-bf9475a09c66-secret-volume\") pod \"collect-profiles-29421525-2b9xz\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.992982 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qrwsh\" (UniqueName: \"kubernetes.io/projected/d59be405-9fc2-438f-aa97-c461c35a2f61-kube-api-access-qrwsh\") pod \"service-ca-74545575db-m6g6q\" (UID: \"d59be405-9fc2-438f-aa97-c461c35a2f61\") " pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993001 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a967d951-470b-486f-8037-73dcbdb3e171-config\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993041 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-tmp\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993062 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-metrics-tls\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993084 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d7249681-3c68-4e92-aa49-47edb51bfb04-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-ws4g6\" (UID: \"d7249681-3c68-4e92-aa49-47edb51bfb04\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993110 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/75da01f5-f3ad-49af-a574-6581b6a58ca2-srv-cert\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993129 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5135d4a-e7f3-4ba7-9758-ec80e3beb22e-serving-cert\") pod \"service-ca-operator-5b9c976747-2kdw2\" (UID: \"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993154 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-mountpoint-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993180 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-tmp-dir\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993204 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9zlkv\" (UniqueName: \"kubernetes.io/projected/a00d0805-a28c-4412-a7c8-bc23c90e3bff-kube-api-access-9zlkv\") pod \"kube-storage-version-migrator-operator-565b79b866-rr44v\" (UID: \"a00d0805-a28c-4412-a7c8-bc23c90e3bff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993232 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-tmp\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993268 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-config\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993285 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j7vck\" (UniqueName: \"kubernetes.io/projected/894bdfb3-b1c7-419e-8f5c-7788f22807af-kube-api-access-j7vck\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993305 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7j9cc\" (UniqueName: \"kubernetes.io/projected/947d55c1-7cdf-48de-b10a-e783956ebbd8-kube-api-access-7j9cc\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993425 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-serving-cert\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993450 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4d8034f-233a-444e-aeba-825cdedbff57-serving-cert\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993478 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/75da01f5-f3ad-49af-a574-6581b6a58ca2-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993524 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6g8td\" (UniqueName: \"kubernetes.io/projected/561ee952-c55f-43a8-bf1a-9aa3d3f7aafa-kube-api-access-6g8td\") pod \"ingress-canary-6scqm\" (UID: \"561ee952-c55f-43a8-bf1a-9aa3d3f7aafa\") " pod="openshift-ingress-canary/ingress-canary-6scqm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993548 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-client-ca\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993580 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993604 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d59be405-9fc2-438f-aa97-c461c35a2f61-signing-key\") pod \"service-ca-74545575db-m6g6q\" (UID: \"d59be405-9fc2-438f-aa97-c461c35a2f61\") " pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993644 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7wbbg\" (UniqueName: \"kubernetes.io/projected/804c8bba-5b68-4a3e-8060-e209d86f3d38-kube-api-access-7wbbg\") pod \"multus-admission-controller-69db94689b-8hbcw\" (UID: \"804c8bba-5b68-4a3e-8060-e209d86f3d38\") " pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993668 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-srv-cert\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993687 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a00d0805-a28c-4412-a7c8-bc23c90e3bff-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-rr44v\" (UID: \"a00d0805-a28c-4412-a7c8-bc23c90e3bff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993705 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0b29ecdf-6004-475e-8bcb-5fffa678a02b-ready\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993726 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/894bdfb3-b1c7-419e-8f5c-7788f22807af-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993750 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9pn9\" (UniqueName: \"kubernetes.io/projected/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-kube-api-access-b9pn9\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993768 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jglhd\" (UniqueName: \"kubernetes.io/projected/de306a6c-37e9-4adb-bd62-44825e0df8c1-kube-api-access-jglhd\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993791 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9ng7r\" (UniqueName: \"kubernetes.io/projected/ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150-kube-api-access-9ng7r\") pod \"package-server-manager-77f986bd66-22x2p\" (UID: \"ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993830 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/75da01f5-f3ad-49af-a574-6581b6a58ca2-tmpfs\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993856 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a00d0805-a28c-4412-a7c8-bc23c90e3bff-config\") pod \"kube-storage-version-migrator-operator-565b79b866-rr44v\" (UID: \"a00d0805-a28c-4412-a7c8-bc23c90e3bff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993883 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993940 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993965 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.993993 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qj9nj\" (UniqueName: \"kubernetes.io/projected/de384ce1-a016-4108-bb34-bf9475a09c66-kube-api-access-qj9nj\") pod \"collect-profiles-29421525-2b9xz\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.994014 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hzpvv\" (UniqueName: \"kubernetes.io/projected/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-kube-api-access-hzpvv\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.994033 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xzqwg\" (UniqueName: \"kubernetes.io/projected/75da01f5-f3ad-49af-a574-6581b6a58ca2-kube-api-access-xzqwg\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.994061 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cjk82\" (UniqueName: \"kubernetes.io/projected/a967d951-470b-486f-8037-73dcbdb3e171-kube-api-access-cjk82\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.994083 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.994119 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-csi-data-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.994137 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a967d951-470b-486f-8037-73dcbdb3e171-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.991478 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-images\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.987954 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.998060 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150-package-server-manager-serving-cert\") pod 
\"package-server-manager-77f986bd66-22x2p\" (UID: \"ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.987596 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-tmpfs\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.998443 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-service-ca-bundle\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.998646 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/de306a6c-37e9-4adb-bd62-44825e0df8c1-tmpfs\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.989795 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4d8034f-233a-444e-aeba-825cdedbff57-tmp-dir\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.988120 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee76cc25-84c6-48a3-874b-4d310ae15a1e-config\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.987470 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-registration-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.986899 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.999575 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/804c8bba-5b68-4a3e-8060-e209d86f3d38-webhook-certs\") pod \"multus-admission-controller-69db94689b-8hbcw\" (UID: \"804c8bba-5b68-4a3e-8060-e209d86f3d38\") " pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.999632 5107 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee76cc25-84c6-48a3-874b-4d310ae15a1e-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:04 crc kubenswrapper[5107]: I1209 14:58:04.999710 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-metrics-certs\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.000134 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d59be405-9fc2-438f-aa97-c461c35a2f61-signing-cabundle\") pod \"service-ca-74545575db-m6g6q\" (UID: \"d59be405-9fc2-438f-aa97-c461c35a2f61\") " pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.000285 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4d8034f-233a-444e-aeba-825cdedbff57-config\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:05 crc kubenswrapper[5107]: E1209 14:58:05.000645 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:05.500624826 +0000 UTC m=+133.224329915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:04.989349 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:04.989840 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-plugins-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.001014 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-profile-collector-cert\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.001706 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de384ce1-a016-4108-bb34-bf9475a09c66-config-volume\") pod \"collect-profiles-29421525-2b9xz\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.001886 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-socket-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.003314 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-config\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.004277 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d7249681-3c68-4e92-aa49-47edb51bfb04-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-ws4g6\" (UID: \"d7249681-3c68-4e92-aa49-47edb51bfb04\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.004980 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a967d951-470b-486f-8037-73dcbdb3e171-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.006566 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a967d951-470b-486f-8037-73dcbdb3e171-images\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.008492 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-csi-data-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.008666 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.009124 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/75da01f5-f3ad-49af-a574-6581b6a58ca2-tmpfs\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.009177 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.008675 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/894bdfb3-b1c7-419e-8f5c-7788f22807af-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.012469 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-client-ca\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.012866 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a967d951-470b-486f-8037-73dcbdb3e171-config\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.013281 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d7249681-3c68-4e92-aa49-47edb51bfb04-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-ws4g6\" (UID: 
\"d7249681-3c68-4e92-aa49-47edb51bfb04\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.014486 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-stats-auth\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.014985 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de306a6c-37e9-4adb-bd62-44825e0df8c1-apiservice-cert\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.015527 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a00d0805-a28c-4412-a7c8-bc23c90e3bff-config\") pod \"kube-storage-version-migrator-operator-565b79b866-rr44v\" (UID: \"a00d0805-a28c-4412-a7c8-bc23c90e3bff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.015642 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.016688 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-config\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.016754 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-tmp\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.016771 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/947d55c1-7cdf-48de-b10a-e783956ebbd8-mountpoint-dir\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.017067 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-tmp-dir\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.017530 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-tmp\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.017639 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0b29ecdf-6004-475e-8bcb-5fffa678a02b-ready\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.019058 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5135d4a-e7f3-4ba7-9758-ec80e3beb22e-serving-cert\") pod \"service-ca-operator-5b9c976747-2kdw2\" (UID: \"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.019131 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/894bdfb3-b1c7-419e-8f5c-7788f22807af-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.021948 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.025996 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.026177 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a00d0805-a28c-4412-a7c8-bc23c90e3bff-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-rr44v\" (UID: \"a00d0805-a28c-4412-a7c8-bc23c90e3bff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.026294 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.026618 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b29ecdf-6004-475e-8bcb-5fffa678a02b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.027205 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a4d8034f-233a-444e-aeba-825cdedbff57-serving-cert\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.027324 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/75da01f5-f3ad-49af-a574-6581b6a58ca2-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.028257 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d59be405-9fc2-438f-aa97-c461c35a2f61-signing-key\") pod \"service-ca-74545575db-m6g6q\" (UID: \"d59be405-9fc2-438f-aa97-c461c35a2f61\") " pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.028754 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.028890 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-default-certificate\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.029150 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-srv-cert\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.029376 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa5aa762-dad4-4341-abc7-294aaa80993e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9jpg8\" (UID: \"fa5aa762-dad4-4341-abc7-294aaa80993e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.031020 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.031887 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/75da01f5-f3ad-49af-a574-6581b6a58ca2-srv-cert\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.032038 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-serving-cert\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.032200 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de306a6c-37e9-4adb-bd62-44825e0df8c1-webhook-cert\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.032493 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de384ce1-a016-4108-bb34-bf9475a09c66-secret-volume\") pod \"collect-profiles-29421525-2b9xz\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.042427 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.049809 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-config-volume\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.061968 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.080402 5107 request.go:752] "Waited before sending request" delay="1.766877s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.082627 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.095150 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:05 crc kubenswrapper[5107]: E1209 14:58:05.096099 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:05.596076514 +0000 UTC m=+133.319781403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.101047 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-metrics-tls\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.101125 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.121935 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.128357 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/561ee952-c55f-43a8-bf1a-9aa3d3f7aafa-cert\") pod \"ingress-canary-6scqm\" (UID: \"561ee952-c55f-43a8-bf1a-9aa3d3f7aafa\") " pod="openshift-ingress-canary/ingress-canary-6scqm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.140980 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.161377 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.184797 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.188621 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" event={"ID":"834666aa-f503-44df-8377-77c8670167cd","Type":"ContainerStarted","Data":"c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4"} Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.188940 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.195103 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" event={"ID":"932af8ce-8f0e-42d8-8a9e-4f1464ca84aa","Type":"ContainerStarted","Data":"1a395d2c0a8da7616ffb14ff13a6253ceb55581c6a5c529e347daf56033c1ce6"} Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.195148 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4ca34d01-f0a4-4610-a3d9-5e26da82c790-certs\") pod \"machine-config-server-sxtbk\" (UID: \"4ca34d01-f0a4-4610-a3d9-5e26da82c790\") " pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.197789 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:05 crc kubenswrapper[5107]: E1209 14:58:05.198258 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:05.698244834 +0000 UTC m=+133.421949723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.201458 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" event={"ID":"9ac2110c-ba1a-407f-bcb0-032edc5584f5","Type":"ContainerStarted","Data":"12352035bc3f0f21b984d81198ce95c249ef5216f055d789d6af6eb2594ab0cd"} Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.206486 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.206920 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" event={"ID":"edf7ee28-9ed2-48ba-b01e-db21605dc6d8","Type":"ContainerStarted","Data":"561b6241aa2fde34348f2ca64d7684a6ddf3a3067d95618c3063fa45a249c02f"} Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.206987 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" event={"ID":"edf7ee28-9ed2-48ba-b01e-db21605dc6d8","Type":"ContainerStarted","Data":"6d7fca330e6eb3d593e24c885cb8ddf9edb5fe3afb7c2aaa1b6d4b5b83453f87"} Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.207003 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" event={"ID":"edf7ee28-9ed2-48ba-b01e-db21605dc6d8","Type":"ContainerStarted","Data":"d06a50f00031e1ac23582ff4a6bb16a52dd2bbf1ffe77a5831e6907e3975a82f"} Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.208985 5107 generic.go:358] "Generic (PLEG): container finished" podID="c68ff193-fd5f-4a85-8713-20ef57d86ab8" containerID="3ebcbfae72b4b47888cf42bfa977efd92b82529a685daf498c947fe1872f8f80" exitCode=0 Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.209231 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" event={"ID":"c68ff193-fd5f-4a85-8713-20ef57d86ab8","Type":"ContainerDied","Data":"3ebcbfae72b4b47888cf42bfa977efd92b82529a685daf498c947fe1872f8f80"} Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.217414 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" 
event={"ID":"83ca1c49-458a-4ada-acdd-b7364abbf491","Type":"ContainerStarted","Data":"5c23f56b4ce2ac1950b8e892bc7f58e5afcd8e3847f1db142a78aaf086a906ed"} Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.219554 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" event={"ID":"31deaf77-6b16-4eb8-9d1b-6f111794da2f","Type":"ContainerStarted","Data":"9c3405006fe7e8e5253dfd24dfdee883a508739d2386520e1baeea4833bf2166"} Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.222861 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.231718 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4ca34d01-f0a4-4610-a3d9-5e26da82c790-node-bootstrap-token\") pod \"machine-config-server-sxtbk\" (UID: \"4ca34d01-f0a4-4610-a3d9-5e26da82c790\") " pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.240949 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.242274 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-bg27m" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.244677 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-bg27m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.244741 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bg27m" podUID="b47c5069-df03-4bb4-9b81-2213e9d95183" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.283632 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.285448 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmkd7\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-kube-api-access-fmkd7\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.300397 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:05 crc kubenswrapper[5107]: E1209 14:58:05.302314 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-09 14:58:05.802294065 +0000 UTC m=+133.525998954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.308786 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t75qd\" (UniqueName: \"kubernetes.io/projected/26c5f65a-531c-40f2-a64f-95616e7abb9a-kube-api-access-t75qd\") pod \"openshift-controller-manager-operator-686468bdd5-vkr2j\" (UID: \"26c5f65a-531c-40f2-a64f-95616e7abb9a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.327872 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.328494 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hxm4\" (UniqueName: \"kubernetes.io/projected/5e817bc4-4ff6-435d-b70f-29459a1800fe-kube-api-access-5hxm4\") pod \"etcd-operator-69b85846b6-hv97n\" (UID: \"5e817bc4-4ff6-435d-b70f-29459a1800fe\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.339890 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.353127 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n84wh\" (UniqueName: \"kubernetes.io/projected/7bad4c91-a636-48b6-bd5a-dc4cae3d40ca-kube-api-access-n84wh\") pod \"console-operator-67c89758df-dmt84\" (UID: \"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca\") " pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.359493 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-bound-sa-token\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.384403 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f78zz\" (UniqueName: \"kubernetes.io/projected/8b5927af-067e-4ec9-9774-89f6357ce9f1-kube-api-access-f78zz\") pod \"migrator-866fcbc849-xbd5v\" (UID: \"8b5927af-067e-4ec9-9774-89f6357ce9f1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.400470 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/894bdfb3-b1c7-419e-8f5c-7788f22807af-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.410039 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:05 crc kubenswrapper[5107]: E1209 14:58:05.411555 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:05.911537086 +0000 UTC m=+133.635242055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.424091 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2q6v\" (UniqueName: \"kubernetes.io/projected/d7249681-3c68-4e92-aa49-47edb51bfb04-kube-api-access-f2q6v\") pod \"machine-config-controller-f9cdd68f7-ws4g6\" (UID: \"d7249681-3c68-4e92-aa49-47edb51bfb04\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.449772 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d8034f-233a-444e-aeba-825cdedbff57-kube-api-access\") pod \"kube-apiserver-operator-575994946d-xhpfh\" (UID: \"a4d8034f-233a-444e-aeba-825cdedbff57\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.465620 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.471201 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-v7bbp\" (UID: \"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.502386 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhh5j\" (UniqueName: \"kubernetes.io/projected/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-kube-api-access-zhh5j\") pod \"route-controller-manager-776cdc94d6-9lrlj\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.508371 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hbrq\" (UniqueName: \"kubernetes.io/projected/fa5aa762-dad4-4341-abc7-294aaa80993e-kube-api-access-2hbrq\") pod \"control-plane-machine-set-operator-75ffdb6fcd-9jpg8\" (UID: \"fa5aa762-dad4-4341-abc7-294aaa80993e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.511057 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:05 crc kubenswrapper[5107]: E1209 14:58:05.511575 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" 
failed. No retries permitted until 2025-12-09 14:58:06.011553937 +0000 UTC m=+133.735258826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.512529 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.540569 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqq6\" (UniqueName: \"kubernetes.io/projected/0b29ecdf-6004-475e-8bcb-5fffa678a02b-kube-api-access-zkqq6\") pod \"cni-sysctl-allowlist-ds-vptvb\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.551220 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee76cc25-84c6-48a3-874b-4d310ae15a1e-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-zjd8q\" (UID: \"ee76cc25-84c6-48a3-874b-4d310ae15a1e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.571653 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbqt4\" (UniqueName: \"kubernetes.io/projected/eaeb35b9-8034-4a9d-9cd5-ca6d11970674-kube-api-access-gbqt4\") pod \"dns-default-v799x\" (UID: \"eaeb35b9-8034-4a9d-9cd5-ca6d11970674\") " pod="openshift-dns/dns-default-v799x" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.574728 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.578215 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.586584 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsp5f\" (UniqueName: \"kubernetes.io/projected/c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427-kube-api-access-bsp5f\") pod \"machine-config-operator-67c9d58cbb-qgjdb\" (UID: \"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.608149 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnhhp\" (UniqueName: \"kubernetes.io/projected/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-kube-api-access-hnhhp\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.624179 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.625402 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:05 crc kubenswrapper[5107]: E1209 14:58:05.625739 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:06.125725082 +0000 UTC m=+133.849429961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.644449 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j9cc\" (UniqueName: \"kubernetes.io/projected/947d55c1-7cdf-48de-b10a-e783956ebbd8-kube-api-access-7j9cc\") pod \"csi-hostpathplugin-g5bqc\" (UID: \"947d55c1-7cdf-48de-b10a-e783956ebbd8\") " pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.651314 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-knlr9\" (UniqueName: \"kubernetes.io/projected/4ca34d01-f0a4-4610-a3d9-5e26da82c790-kube-api-access-knlr9\") pod \"machine-config-server-sxtbk\" (UID: \"4ca34d01-f0a4-4610-a3d9-5e26da82c790\") " pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.664450 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9pn9\" (UniqueName: \"kubernetes.io/projected/43ec8aae-6bc0-438d-84c5-63ef04ca4db9-kube-api-access-b9pn9\") pod \"router-default-68cf44c8b8-7hwmg\" (UID: \"43ec8aae-6bc0-438d-84c5-63ef04ca4db9\") " pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.684355 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dzgp\" (UniqueName: \"kubernetes.io/projected/f5135d4a-e7f3-4ba7-9758-ec80e3beb22e-kube-api-access-2dzgp\") pod \"service-ca-operator-5b9c976747-2kdw2\" (UID: \"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.692325 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.699268 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.704027 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzqwg\" (UniqueName: \"kubernetes.io/projected/75da01f5-f3ad-49af-a574-6581b6a58ca2-kube-api-access-xzqwg\") pod \"catalog-operator-75ff9f647d-bbh5g\" (UID: \"75da01f5-f3ad-49af-a574-6581b6a58ca2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.705092 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.715097 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.718859 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj9nj\" (UniqueName: \"kubernetes.io/projected/de384ce1-a016-4108-bb34-bf9475a09c66-kube-api-access-qj9nj\") pod \"collect-profiles-29421525-2b9xz\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.726820 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:05 crc kubenswrapper[5107]: E1209 14:58:05.727307 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:06.227285775 +0000 UTC m=+133.950990664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.735258 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.740991 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzpvv\" (UniqueName: \"kubernetes.io/projected/e8fd93d9-b829-40ad-8bf1-5e7d231b22fd-kube-api-access-hzpvv\") pod \"olm-operator-5cdf44d969-bftlm\" (UID: \"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.743146 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.765482 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjk82\" (UniqueName: \"kubernetes.io/projected/a967d951-470b-486f-8037-73dcbdb3e171-kube-api-access-cjk82\") pod \"machine-api-operator-755bb95488-vx5zg\" (UID: \"a967d951-470b-486f-8037-73dcbdb3e171\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.782072 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.792228 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-v799x" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.805920 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.807539 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-sxtbk" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.822703 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.830450 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:05 crc kubenswrapper[5107]: E1209 14:58:05.830850 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:06.330837863 +0000 UTC m=+134.054542752 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.838721 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-9lxhm\" (UID: \"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.839328 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrwsh\" (UniqueName: \"kubernetes.io/projected/d59be405-9fc2-438f-aa97-c461c35a2f61-kube-api-access-qrwsh\") pod \"service-ca-74545575db-m6g6q\" (UID: \"d59be405-9fc2-438f-aa97-c461c35a2f61\") " pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.858091 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jglhd\" (UniqueName: \"kubernetes.io/projected/de306a6c-37e9-4adb-bd62-44825e0df8c1-kube-api-access-jglhd\") pod \"packageserver-7d4fc7d867-94qbq\" (UID: \"de306a6c-37e9-4adb-bd62-44825e0df8c1\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.858449 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.876010 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wbbg\" (UniqueName: \"kubernetes.io/projected/804c8bba-5b68-4a3e-8060-e209d86f3d38-kube-api-access-7wbbg\") pod \"multus-admission-controller-69db94689b-8hbcw\" (UID: \"804c8bba-5b68-4a3e-8060-e209d86f3d38\") " pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.878240 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g8td\" (UniqueName: \"kubernetes.io/projected/561ee952-c55f-43a8-bf1a-9aa3d3f7aafa-kube-api-access-6g8td\") pod \"ingress-canary-6scqm\" (UID: \"561ee952-c55f-43a8-bf1a-9aa3d3f7aafa\") " pod="openshift-ingress-canary/ingress-canary-6scqm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.889659 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.931879 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37136: no serving certificate available for the kubelet" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.932537 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:05 crc kubenswrapper[5107]: E1209 14:58:05.932995 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:06.432973632 +0000 UTC m=+134.156678531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.945511 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sgtc\" (UniqueName: \"kubernetes.io/projected/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-kube-api-access-6sgtc\") pod \"marketplace-operator-547dbd544d-fnsxn\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.958114 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7vck\" (UniqueName: \"kubernetes.io/projected/894bdfb3-b1c7-419e-8f5c-7788f22807af-kube-api-access-j7vck\") pod \"ingress-operator-6b9cb4dbcf-vnd9g\" (UID: \"894bdfb3-b1c7-419e-8f5c-7788f22807af\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.958456 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.974047 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.974175 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ng7r\" (UniqueName: \"kubernetes.io/projected/ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150-kube-api-access-9ng7r\") pod \"package-server-manager-77f986bd66-22x2p\" (UID: \"ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.982561 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.984163 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zlkv\" (UniqueName: \"kubernetes.io/projected/a00d0805-a28c-4412-a7c8-bc23c90e3bff-kube-api-access-9zlkv\") pod \"kube-storage-version-migrator-operator-565b79b866-rr44v\" (UID: \"a00d0805-a28c-4412-a7c8-bc23c90e3bff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:05 crc kubenswrapper[5107]: I1209 14:58:05.993966 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.007999 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-m6g6q" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.043846 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37148: no serving certificate available for the kubelet" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.047350 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.048248 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.048639 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:06.548627097 +0000 UTC m=+134.272331986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.059282 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-hv97n"] Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.061767 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6scqm" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.082885 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j"] Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.130740 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.139210 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.145922 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37164: no serving certificate available for the kubelet" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.149127 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.149668 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:06.649647666 +0000 UTC m=+134.373352555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.171533 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6"] Dec 09 14:58:06 crc kubenswrapper[5107]: W1209 14:58:06.179720 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ca34d01_f0a4_4610_a3d9_5e26da82c790.slice/crio-81702c1f43145faedf24590776add75c4d886217396246971dff30f9706763e6 WatchSource:0}: Error finding container 81702c1f43145faedf24590776add75c4d886217396246971dff30f9706763e6: Status 404 returned error can't find the container with id 81702c1f43145faedf24590776add75c4d886217396246971dff30f9706763e6 Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.225430 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" event={"ID":"d7249681-3c68-4e92-aa49-47edb51bfb04","Type":"ContainerStarted","Data":"ed53cfedd4fc831c4bfcdd890569c7bfbea294363ce6aa82189a80304d4795c8"} Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.227150 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" event={"ID":"5e817bc4-4ff6-435d-b70f-29459a1800fe","Type":"ContainerStarted","Data":"6f7ee8e175413c7ac2cbc4b58f500818c093614cda08eb8752cfe17d9eb8c3e9"} Dec 09 14:58:06 crc kubenswrapper[5107]: W1209 14:58:06.229186 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26c5f65a_531c_40f2_a64f_95616e7abb9a.slice/crio-1f8900b7ee33a001d8b1db268a5754c8a29dcef49fd84afc245bbb235713412d WatchSource:0}: Error finding container 
1f8900b7ee33a001d8b1db268a5754c8a29dcef49fd84afc245bbb235713412d: Status 404 returned error can't find the container with id 1f8900b7ee33a001d8b1db268a5754c8a29dcef49fd84afc245bbb235713412d Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.229691 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" event={"ID":"43ec8aae-6bc0-438d-84c5-63ef04ca4db9","Type":"ContainerStarted","Data":"1c8cf47f0f66a0adae25bbde71d90617f786cdc9be5f0befe0d81a35c44addc8"} Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.233875 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.246401 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37180: no serving certificate available for the kubelet" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.255101 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.255704 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:06.755686931 +0000 UTC m=+134.479391820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.256189 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" event={"ID":"c68ff193-fd5f-4a85-8713-20ef57d86ab8","Type":"ContainerStarted","Data":"36a3def85d5560564121a3c9858a6bf6e6d04809e44337606daa4740bf9a118a"} Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.256295 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.280850 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-sxtbk" event={"ID":"4ca34d01-f0a4-4610-a3d9-5e26da82c790","Type":"ContainerStarted","Data":"81702c1f43145faedf24590776add75c4d886217396246971dff30f9706763e6"} Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.303551 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" event={"ID":"0b29ecdf-6004-475e-8bcb-5fffa678a02b","Type":"ContainerStarted","Data":"866ebdaf335380dbc828fc419f1063a54f63408d6eaed597a71f4f3b5c63dc29"} Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.320959 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-bg27m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.321011 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bg27m" podUID="b47c5069-df03-4bb4-9b81-2213e9d95183" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.350188 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37188: no serving certificate available for the kubelet" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.375625 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.375988 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:06.875958041 +0000 UTC m=+134.599662930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.376715 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.389559 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:06.889527997 +0000 UTC m=+134.613232886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.431384 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-bg27m" podStartSLOduration=113.415290603 podStartE2EDuration="1m53.415290603s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:06.376034743 +0000 UTC m=+134.099739632" watchObservedRunningTime="2025-12-09 14:58:06.415290603 +0000 UTC m=+134.138995492" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.455178 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37198: no serving certificate available for the kubelet" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.473024 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-dmt84"] Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.483950 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.484431 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:06.98440655 +0000 UTC m=+134.708111439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.503381 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v"] Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.550123 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37204: no serving certificate available for the kubelet" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.586045 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.587241 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.087215498 +0000 UTC m=+134.810920467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.599104 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-9rw2f" podStartSLOduration=114.599077068 podStartE2EDuration="1m54.599077068s" podCreationTimestamp="2025-12-09 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:06.596621112 +0000 UTC m=+134.320326011" watchObservedRunningTime="2025-12-09 14:58:06.599077068 +0000 UTC m=+134.322781957" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.649064 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37214: no serving certificate available for the kubelet" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.673829 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fttvh" podStartSLOduration=114.673808928 podStartE2EDuration="1m54.673808928s" podCreationTimestamp="2025-12-09 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:06.673061597 +0000 UTC m=+134.396766486" watchObservedRunningTime="2025-12-09 14:58:06.673808928 +0000 UTC m=+134.397513817" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.690315 5107 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.690542 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.190498618 +0000 UTC m=+134.914203507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.691382 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.691827 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.191816923 +0000 UTC m=+134.915521812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: W1209 14:58:06.711620 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bad4c91_a636_48b6_bd5a_dc4cae3d40ca.slice/crio-fb93c7a2060de78d2a633a1ce1acf8cca6af1d610b680fa76c2cd03f1f1dec0f WatchSource:0}: Error finding container fb93c7a2060de78d2a633a1ce1acf8cca6af1d610b680fa76c2cd03f1f1dec0f: Status 404 returned error can't find the container with id fb93c7a2060de78d2a633a1ce1acf8cca6af1d610b680fa76c2cd03f1f1dec0f Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.793226 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.793593 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.293568233 +0000 UTC m=+135.017273122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.864917 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-8gpg6" podStartSLOduration=115.864881409 podStartE2EDuration="1m55.864881409s" podCreationTimestamp="2025-12-09 14:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:06.841678792 +0000 UTC m=+134.565383681" watchObservedRunningTime="2025-12-09 14:58:06.864881409 +0000 UTC m=+134.588586298" Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.872709 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh"] Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.895423 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:06 crc kubenswrapper[5107]: E1209 14:58:06.895853 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.395836195 +0000 UTC m=+135.119541084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.906725 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8"] Dec 09 14:58:06 crc kubenswrapper[5107]: I1209 14:58:06.981813 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp"] Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.001454 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.001716 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.501668385 +0000 UTC m=+135.225373274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.079921 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" podStartSLOduration=114.079904989 podStartE2EDuration="1m54.079904989s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:07.079208069 +0000 UTC m=+134.802912958" watchObservedRunningTime="2025-12-09 14:58:07.079904989 +0000 UTC m=+134.803609878" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.104345 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.105019 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.604994076 +0000 UTC m=+135.328698965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.157162 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-cttpw" podStartSLOduration=114.157136175 podStartE2EDuration="1m54.157136175s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:07.116273001 +0000 UTC m=+134.839977910" watchObservedRunningTime="2025-12-09 14:58:07.157136175 +0000 UTC m=+134.880841064" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.205406 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.205579 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.705549553 +0000 UTC m=+135.429254452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.206070 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.208257 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.708240175 +0000 UTC m=+135.431945064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: W1209 14:58:07.246420 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2de0479_bb6f_4d4f_a2a3_4f145f4cc49d.slice/crio-da3327097c9b28c21e414b5942c221786e932695a539100c9a1196fbb1bb77f6 WatchSource:0}: Error finding container da3327097c9b28c21e414b5942c221786e932695a539100c9a1196fbb1bb77f6: Status 404 returned error can't find the container with id da3327097c9b28c21e414b5942c221786e932695a539100c9a1196fbb1bb77f6 Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.314279 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.315373 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.815303028 +0000 UTC m=+135.539007917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.322084 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" event={"ID":"a4d8034f-233a-444e-aeba-825cdedbff57","Type":"ContainerStarted","Data":"727bc582d108bcd8112f63ba05042cf15536369e7e6d35277fbdac2b0429b9a9"} Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.323003 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37226: no serving certificate available for the kubelet" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.336256 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v" event={"ID":"8b5927af-067e-4ec9-9774-89f6357ce9f1","Type":"ContainerStarted","Data":"2a8cddce00fd4ab89e90b7484332d06434cf70ee86802fc0c9c13d08ddbd881a"} Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.362206 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-2n527" podStartSLOduration=114.362172754 podStartE2EDuration="1m54.362172754s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:07.353685115 +0000 UTC m=+135.077390014" watchObservedRunningTime="2025-12-09 14:58:07.362172754 +0000 UTC m=+135.085877643" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.365768 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-fn2cf" podStartSLOduration=115.365743851 podStartE2EDuration="1m55.365743851s" podCreationTimestamp="2025-12-09 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:07.313957322 +0000 UTC m=+135.037662211" watchObservedRunningTime="2025-12-09 14:58:07.365743851 +0000 UTC m=+135.089448740" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.379217 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" event={"ID":"43ec8aae-6bc0-438d-84c5-63ef04ca4db9","Type":"ContainerStarted","Data":"45a37ffa876b79cff54b2730601fd8fa9d8a6c3d9b3409648a4e5da320f67524"} Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.395515 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-dmt84" event={"ID":"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca","Type":"ContainerStarted","Data":"fb93c7a2060de78d2a633a1ce1acf8cca6af1d610b680fa76c2cd03f1f1dec0f"} Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.441796 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: 
\"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.442555 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:07.942536556 +0000 UTC m=+135.666241445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.445923 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" event={"ID":"26c5f65a-531c-40f2-a64f-95616e7abb9a","Type":"ContainerStarted","Data":"30c3425168edf8a43116579c32b4480dce2d15fee03e5f10d1aa631510693580"} Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.445991 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" event={"ID":"26c5f65a-531c-40f2-a64f-95616e7abb9a","Type":"ContainerStarted","Data":"1f8900b7ee33a001d8b1db268a5754c8a29dcef49fd84afc245bbb235713412d"} Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.453932 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-sxtbk" event={"ID":"4ca34d01-f0a4-4610-a3d9-5e26da82c790","Type":"ContainerStarted","Data":"0b722201650d710edb42d58ea2d79eb429cc35354fef4b0bf36185d33d95f9c6"} Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.468658 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" event={"ID":"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d","Type":"ContainerStarted","Data":"da3327097c9b28c21e414b5942c221786e932695a539100c9a1196fbb1bb77f6"} Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.494426 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" event={"ID":"d7249681-3c68-4e92-aa49-47edb51bfb04","Type":"ContainerStarted","Data":"586c3a52dc96c406eb8f1de4d9fa45e163bc35a1d591bcfca8bde3599083781e"} Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.504173 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-9927m" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.543583 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.547413 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.047379827 +0000 UTC m=+135.771084826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.628614 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" podStartSLOduration=114.628586641 podStartE2EDuration="1m54.628586641s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:07.616136526 +0000 UTC m=+135.339841435" watchObservedRunningTime="2025-12-09 14:58:07.628586641 +0000 UTC m=+135.352291530" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.647665 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.648042 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.148023326 +0000 UTC m=+135.871728215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.669934 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-v799x"] Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.708012 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q"] Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.735292 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" podStartSLOduration=115.735268054 podStartE2EDuration="1m55.735268054s" podCreationTimestamp="2025-12-09 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:07.730976548 +0000 UTC m=+135.454681437" watchObservedRunningTime="2025-12-09 14:58:07.735268054 +0000 UTC m=+135.458972943" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.749691 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.750006 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.249983051 +0000 UTC m=+135.973687940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.750463 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq"] Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.772512 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2"] Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.785190 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.785920 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.786018 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.789514 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g5bqc"] Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.812165 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm"] Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.828294 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g"] Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.851166 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.852419 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.352395398 +0000 UTC m=+136.076100287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.884902 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb"] Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.887908 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-m6g6q"] Dec 09 14:58:07 crc kubenswrapper[5107]: W1209 14:58:07.890925 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5135d4a_e7f3_4ba7_9758_ec80e3beb22e.slice/crio-c4f79180949cb8fa7b719666f9b83e0fbc00b993912aae8dd4b6f9929e16aa4c WatchSource:0}: Error finding container c4f79180949cb8fa7b719666f9b83e0fbc00b993912aae8dd4b6f9929e16aa4c: Status 404 returned error can't find the container with id c4f79180949cb8fa7b719666f9b83e0fbc00b993912aae8dd4b6f9929e16aa4c Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.894139 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm"] Dec 09 14:58:07 crc kubenswrapper[5107]: W1209 14:58:07.897773 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod947d55c1_7cdf_48de_b10a_e783956ebbd8.slice/crio-54dba68256d2abc6a3c7907499f291ef33598205e9b7b180348125b7054b8a90 WatchSource:0}: Error finding container 54dba68256d2abc6a3c7907499f291ef33598205e9b7b180348125b7054b8a90: Status 404 returned error can't find the container with id 54dba68256d2abc6a3c7907499f291ef33598205e9b7b180348125b7054b8a90 Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.905977 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p"] Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.955839 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.956073 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.456031448 +0000 UTC m=+136.179736337 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.956309 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:07 crc kubenswrapper[5107]: E1209 14:58:07.956826 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.456817539 +0000 UTC m=+136.180522428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.959012 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.960670 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:07 crc kubenswrapper[5107]: I1209 14:58:07.978143 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" podStartSLOduration=114.978115095 podStartE2EDuration="1m54.978115095s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:07.971368912 +0000 UTC m=+135.695073801" watchObservedRunningTime="2025-12-09 14:58:07.978115095 +0000 UTC m=+135.701819994" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.001256 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.031316 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-sxtbk" podStartSLOduration=6.031290511 podStartE2EDuration="6.031290511s" podCreationTimestamp="2025-12-09 14:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:08.028615899 +0000 UTC m=+135.752320788" watchObservedRunningTime="2025-12-09 14:58:08.031290511 +0000 UTC m=+135.754995400" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.065248 
5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj"] Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.071159 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:08 crc kubenswrapper[5107]: E1209 14:58:08.071689 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.571661532 +0000 UTC m=+136.295366421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.074245 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:08 crc kubenswrapper[5107]: E1209 14:58:08.076517 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.576503182 +0000 UTC m=+136.300208071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:08 crc kubenswrapper[5107]: W1209 14:58:08.108492 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb028dc6_bfe0_4ca9_8e81_4b2a9b954524.slice/crio-56c05abc256d3c26e533deb782c0c61210b4e1c1cc2f4e250c6059a7db01d309 WatchSource:0}: Error finding container 56c05abc256d3c26e533deb782c0c61210b4e1c1cc2f4e250c6059a7db01d309: Status 404 returned error can't find the container with id 56c05abc256d3c26e533deb782c0c61210b4e1c1cc2f4e250c6059a7db01d309 Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.110231 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-vkr2j" podStartSLOduration=115.110208063 podStartE2EDuration="1m55.110208063s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:08.09009252 +0000 UTC m=+135.813797409" watchObservedRunningTime="2025-12-09 14:58:08.110208063 +0000 UTC m=+135.833912952" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.111827 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-vx5zg"] Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.112751 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6scqm"] Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.131942 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podStartSLOduration=115.13192313 podStartE2EDuration="1m55.13192313s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:08.119256128 +0000 UTC m=+135.842961017" watchObservedRunningTime="2025-12-09 14:58:08.13192313 +0000 UTC m=+135.855628019" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.143455 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v"] Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.163683 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz"] Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.176468 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:08 crc kubenswrapper[5107]: E1209 14:58:08.177441 5107 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.677415519 +0000 UTC m=+136.401120408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.217917 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-8hbcw"] Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.221617 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fnsxn"] Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.228243 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g"] Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.280972 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:08 crc kubenswrapper[5107]: E1209 14:58:08.281443 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.781426549 +0000 UTC m=+136.505131438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:08 crc kubenswrapper[5107]: W1209 14:58:08.288633 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804c8bba_5b68_4a3e_8060_e209d86f3d38.slice/crio-37fb553ce88ae71f822bd4fb71571269164ca1e8d3521967588cb6bd35252f0c WatchSource:0}: Error finding container 37fb553ce88ae71f822bd4fb71571269164ca1e8d3521967588cb6bd35252f0c: Status 404 returned error can't find the container with id 37fb553ce88ae71f822bd4fb71571269164ca1e8d3521967588cb6bd35252f0c Dec 09 14:58:08 crc kubenswrapper[5107]: W1209 14:58:08.303169 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf50117dc_dbba_4bb9_9335_fc47f0b9ad48.slice/crio-950bb64ae6590b6586cdf816404cda4ab1b25a3d8a51d34f2f03caa6360455ae WatchSource:0}: Error finding container 950bb64ae6590b6586cdf816404cda4ab1b25a3d8a51d34f2f03caa6360455ae: Status 404 returned error can't find the container with id 950bb64ae6590b6586cdf816404cda4ab1b25a3d8a51d34f2f03caa6360455ae Dec 09 14:58:08 crc kubenswrapper[5107]: W1209 14:58:08.307902 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod894bdfb3_b1c7_419e_8f5c_7788f22807af.slice/crio-d18a931d561c87368c8e19b7e210574ec1bbba91c84c0e3d3a327d6822a5dba2 WatchSource:0}: Error finding container d18a931d561c87368c8e19b7e210574ec1bbba91c84c0e3d3a327d6822a5dba2: Status 404 returned error can't find the container with id d18a931d561c87368c8e19b7e210574ec1bbba91c84c0e3d3a327d6822a5dba2 Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.381853 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:08 crc kubenswrapper[5107]: E1209 14:58:08.382236 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.882215962 +0000 UTC m=+136.605920851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.484218 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:08 crc kubenswrapper[5107]: E1209 14:58:08.484754 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:08.984737922 +0000 UTC m=+136.708442811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.544847 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-v799x" event={"ID":"eaeb35b9-8034-4a9d-9cd5-ca6d11970674","Type":"ContainerStarted","Data":"996f52b41cc18f02bb0f4e04ba2549ab657944ba3f6f62eaef3c8bbbe8844236"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.587965 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" event={"ID":"947d55c1-7cdf-48de-b10a-e783956ebbd8","Type":"ContainerStarted","Data":"54dba68256d2abc6a3c7907499f291ef33598205e9b7b180348125b7054b8a90"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.588530 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:08 crc kubenswrapper[5107]: E1209 14:58:08.588899 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.088882305 +0000 UTC m=+136.812587194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.632239 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" event={"ID":"d7249681-3c68-4e92-aa49-47edb51bfb04","Type":"ContainerStarted","Data":"fe57da5f4aeca0608c89d505d819ba1dfbf1ad3b61c57b9e5c762e1149500553"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.662009 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37242: no serving certificate available for the kubelet" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.694368 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:08 crc kubenswrapper[5107]: E1209 14:58:08.694878 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.194861178 +0000 UTC m=+136.918566067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.695626 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" event={"ID":"5e817bc4-4ff6-435d-b70f-29459a1800fe","Type":"ContainerStarted","Data":"a186008c56335a4b40233869d506b7b257f444b9368038fdeb01f72426871cd5"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.726502 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ws4g6" podStartSLOduration=115.726476953 podStartE2EDuration="1m55.726476953s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:08.673967844 +0000 UTC m=+136.397672743" watchObservedRunningTime="2025-12-09 14:58:08.726476953 +0000 UTC m=+136.450181832" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.727085 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-hv97n" podStartSLOduration=115.727079339 podStartE2EDuration="1m55.727079339s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:08.725847125 +0000 UTC m=+136.449552014" watchObservedRunningTime="2025-12-09 14:58:08.727079339 +0000 UTC m=+136.450784228" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.729484 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" event={"ID":"a4d8034f-233a-444e-aeba-825cdedbff57","Type":"ContainerStarted","Data":"358e80dbe208d7919b9e3a455eca4bae6edbe0e929d63924831a15db71264673"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.776162 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-dmt84" event={"ID":"7bad4c91-a636-48b6-bd5a-dc4cae3d40ca","Type":"ContainerStarted","Data":"e8723b8ec190ef0c9f30fb28f23e4e348e6f5775ad872b970c1dbb87b3186888"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.777705 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.795622 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:08 crc kubenswrapper[5107]: E1209 14:58:08.797521 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" 
failed. No retries permitted until 2025-12-09 14:58:09.2974801 +0000 UTC m=+137.021184999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.798789 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:58:08 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Dec 09 14:58:08 crc kubenswrapper[5107]: [+]process-running ok Dec 09 14:58:08 crc kubenswrapper[5107]: healthz check failed Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.798852 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.805975 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" event={"ID":"a967d951-470b-486f-8037-73dcbdb3e171","Type":"ContainerStarted","Data":"4499d8c5811d9760f8d3b05335c73945d5523d1c785ed0714857d5c3e39b453b"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.817411 5107 patch_prober.go:28] interesting pod/console-operator-67c89758df-dmt84 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.817457 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-dmt84" podUID="7bad4c91-a636-48b6-bd5a-dc4cae3d40ca" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.817726 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xhpfh" podStartSLOduration=115.817695057 podStartE2EDuration="1m55.817695057s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:08.768098057 +0000 UTC m=+136.491802946" watchObservedRunningTime="2025-12-09 14:58:08.817695057 +0000 UTC m=+136.541399936" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.818654 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-dmt84" podStartSLOduration=115.818642332 podStartE2EDuration="1m55.818642332s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 
14:58:08.817219004 +0000 UTC m=+136.540923893" watchObservedRunningTime="2025-12-09 14:58:08.818642332 +0000 UTC m=+136.542347221" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.842423 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" podStartSLOduration=115.842396264 podStartE2EDuration="1m55.842396264s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:08.839906057 +0000 UTC m=+136.563610946" watchObservedRunningTime="2025-12-09 14:58:08.842396264 +0000 UTC m=+136.566101153" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.901293 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:08 crc kubenswrapper[5107]: E1209 14:58:08.924834 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.42480455 +0000 UTC m=+137.148509439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.934047 5107 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-bftlm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.936934 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" podUID="e8fd93d9-b829-40ad-8bf1-5e7d231b22fd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.939528 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.939590 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" event={"ID":"fa5aa762-dad4-4341-abc7-294aaa80993e","Type":"ContainerStarted","Data":"70c7bd7aa0897bbc69789cbddc68abc8f2ebfbb586b688de9928002464d91702"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.939643 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-9jpg8" 
event={"ID":"fa5aa762-dad4-4341-abc7-294aaa80993e","Type":"ContainerStarted","Data":"66de8e02ee4d9d8b36a258c1b5722d7b7d8f85e93d0bc9496a3b22f13cf76ecf"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.939659 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" event={"ID":"a00d0805-a28c-4412-a7c8-bc23c90e3bff","Type":"ContainerStarted","Data":"b6f3c1b583ed4f7a66c665d4d24aba1be15d49e0bce6e1f0254425d31576443f"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.939673 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" event={"ID":"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd","Type":"ContainerStarted","Data":"259571424a2abfc5cf8383f673ba1bc5307527ad6627129c2a9d32664ac64667"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.939689 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" event={"ID":"e8fd93d9-b829-40ad-8bf1-5e7d231b22fd","Type":"ContainerStarted","Data":"a237c61a790876e8cfdf741dfeb22a21cbcad8254ca5fa8338fb05cc35f8f950"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.947250 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" event={"ID":"0b29ecdf-6004-475e-8bcb-5fffa678a02b","Type":"ContainerStarted","Data":"bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.956803 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" event={"ID":"e2de0479-bb6f-4d4f-a2a3-4f145f4cc49d","Type":"ContainerStarted","Data":"e340c85583249293db3e43884307b8d20a0ac9740bf4f3f68ca165c6e8d46840"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.959202 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" event={"ID":"ee76cc25-84c6-48a3-874b-4d310ae15a1e","Type":"ContainerStarted","Data":"1075efbac552545afeed7074f09e856c3a7023465d705cdd7eea242a88d8d2c9"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.962857 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" event={"ID":"ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150","Type":"ContainerStarted","Data":"e0d0e84d324e28af9b9cd2d584b20a31c1331d2975f8c58dc26cc249e47f9176"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.966352 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" event={"ID":"894bdfb3-b1c7-419e-8f5c-7788f22807af","Type":"ContainerStarted","Data":"d18a931d561c87368c8e19b7e210574ec1bbba91c84c0e3d3a327d6822a5dba2"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.985036 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" event={"ID":"de306a6c-37e9-4adb-bd62-44825e0df8c1","Type":"ContainerStarted","Data":"5f1c9b457c586ad2d6438ede0efdfdec7d74d8566b75da8cba831ce8e1b50379"} Dec 09 14:58:08 crc kubenswrapper[5107]: I1209 14:58:08.994542 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" 
event={"ID":"de384ce1-a016-4108-bb34-bf9475a09c66","Type":"ContainerStarted","Data":"8f0ccd9da6e75e534f8742765e0c9370acb53a3fa48c5497d533ad486a13b5e3"} Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.004707 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.504670268 +0000 UTC m=+137.228375157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.004751 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.005763 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.006764 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.506722694 +0000 UTC m=+137.230427583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.007781 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" event={"ID":"75da01f5-f3ad-49af-a574-6581b6a58ca2","Type":"ContainerStarted","Data":"ba3cb37dc0463eed45537472caca5e510056947060e7df28631e985a5b8b2b04"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.007847 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" event={"ID":"75da01f5-f3ad-49af-a574-6581b6a58ca2","Type":"ContainerStarted","Data":"f33404e346884a2c8117b43933c1ad27029b4bd8cb3ed85b3912ab96a0e9399d"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.017540 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" event={"ID":"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524","Type":"ContainerStarted","Data":"56c05abc256d3c26e533deb782c0c61210b4e1c1cc2f4e250c6059a7db01d309"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.019792 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" event={"ID":"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427","Type":"ContainerStarted","Data":"c578255168431f2a0f08eacc11c0f58cf6c3501fb506c4f515876d4b487980ef"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.021087 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" event={"ID":"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9","Type":"ContainerStarted","Data":"0fdcd61de1ca143aaa9acc414383f305a229b7fb93b47a9ff6be16437f85421c"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.025673 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v" event={"ID":"8b5927af-067e-4ec9-9774-89f6357ce9f1","Type":"ContainerStarted","Data":"dad81e88d1145b8e6bc3fa5bc9c1ed69249a726af02464ef39cabb5abe8e71e0"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.025741 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v" event={"ID":"8b5927af-067e-4ec9-9774-89f6357ce9f1","Type":"ContainerStarted","Data":"6ddd057bb762a589bff02cdb6aad0fb240fb85e3a3a5591582babdefa6dde9fe"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.036373 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6scqm" event={"ID":"561ee952-c55f-43a8-bf1a-9aa3d3f7aafa","Type":"ContainerStarted","Data":"e89183988f80f4b3e2079311e4c3f72c12a4d675a14a495f6733c965c685cb3a"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.038063 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-m6g6q" event={"ID":"d59be405-9fc2-438f-aa97-c461c35a2f61","Type":"ContainerStarted","Data":"da517b88f5cba511e46f744e2dea22a2d223020df25903a1c45b9ad9f57872bc"} Dec 09 14:58:09 crc 
kubenswrapper[5107]: I1209 14:58:09.040541 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" event={"ID":"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e","Type":"ContainerStarted","Data":"c4f79180949cb8fa7b719666f9b83e0fbc00b993912aae8dd4b6f9929e16aa4c"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.046220 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" event={"ID":"c68ff193-fd5f-4a85-8713-20ef57d86ab8","Type":"ContainerStarted","Data":"f9853d20fca0775dc55fac3e3337720fd11db3b2c3a2e221e81f7603571e9f48"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.047840 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" event={"ID":"f50117dc-dbba-4bb9-9335-fc47f0b9ad48","Type":"ContainerStarted","Data":"950bb64ae6590b6586cdf816404cda4ab1b25a3d8a51d34f2f03caa6360455ae"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.049064 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" event={"ID":"804c8bba-5b68-4a3e-8060-e209d86f3d38","Type":"ContainerStarted","Data":"37fb553ce88ae71f822bd4fb71571269164ca1e8d3521967588cb6bd35252f0c"} Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.109233 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.109207 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.609142141 +0000 UTC m=+137.332847030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.109796 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.110378 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.610368534 +0000 UTC m=+137.334073423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.154003 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" podStartSLOduration=116.153977912 podStartE2EDuration="1m56.153977912s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:09.152495942 +0000 UTC m=+136.876200831" watchObservedRunningTime="2025-12-09 14:58:09.153977912 +0000 UTC m=+136.877682801" Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.212751 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.213383 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.713323845 +0000 UTC m=+137.437028744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.213498 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.214523 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.714485687 +0000 UTC m=+137.438190576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.315234 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.315400 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.815370372 +0000 UTC m=+137.539075271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.315808 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.316365 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.816355259 +0000 UTC m=+137.540060158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.418682 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.419085 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:09.919068324 +0000 UTC m=+137.642773213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.520215 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.520665 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.020649828 +0000 UTC m=+137.744354717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.621909 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.622179 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.12214177 +0000 UTC m=+137.845846669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.622902 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.623359 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.123327902 +0000 UTC m=+137.847032781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.724458 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.724768 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.224728032 +0000 UTC m=+137.948432931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.788154 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.789570 5107 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-9lrlj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.789617 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" podUID="cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.806283 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:58:09 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Dec 09 14:58:09 crc kubenswrapper[5107]: [+]process-running ok Dec 09 14:58:09 crc kubenswrapper[5107]: healthz check failed Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.806413 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 
09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.806517 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-bxvnm" Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.826076 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xbd5v" podStartSLOduration=116.826039379 podStartE2EDuration="1m56.826039379s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:09.823432258 +0000 UTC m=+137.547137147" watchObservedRunningTime="2025-12-09 14:58:09.826039379 +0000 UTC m=+137.549744278" Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.828748 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.829346 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.329311547 +0000 UTC m=+138.053016436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.872426 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" podStartSLOduration=117.872399971 podStartE2EDuration="1m57.872399971s" podCreationTimestamp="2025-12-09 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:09.871229479 +0000 UTC m=+137.594934368" watchObservedRunningTime="2025-12-09 14:58:09.872399971 +0000 UTC m=+137.596104860" Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.909528 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" podStartSLOduration=116.909507273 podStartE2EDuration="1m56.909507273s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:09.907666034 +0000 UTC m=+137.631370923" watchObservedRunningTime="2025-12-09 14:58:09.909507273 +0000 UTC m=+137.633212162" Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.929996 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.930144 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.430123161 +0000 UTC m=+138.153828060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.930508 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:09 crc kubenswrapper[5107]: E1209 14:58:09.935491 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.435468425 +0000 UTC m=+138.159173314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:09 crc kubenswrapper[5107]: I1209 14:58:09.971309 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" podStartSLOduration=116.971286643 podStartE2EDuration="1m56.971286643s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:09.968050145 +0000 UTC m=+137.691755034" watchObservedRunningTime="2025-12-09 14:58:09.971286643 +0000 UTC m=+137.694991522" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.000185 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" podStartSLOduration=8.000164933 podStartE2EDuration="8.000164933s" podCreationTimestamp="2025-12-09 14:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:09.998438396 +0000 UTC m=+137.722143285" watchObservedRunningTime="2025-12-09 14:58:10.000164933 +0000 UTC m=+137.723869832" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.035028 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:10 crc kubenswrapper[5107]: E1209 14:58:10.035469 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.535446206 +0000 UTC m=+138.259151095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.078092 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" podStartSLOduration=117.078070208 podStartE2EDuration="1m57.078070208s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:10.034304956 +0000 UTC m=+137.758009855" watchObservedRunningTime="2025-12-09 14:58:10.078070208 +0000 UTC m=+137.801775097" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.078923 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-v7bbp" podStartSLOduration=117.07891901 podStartE2EDuration="1m57.07891901s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:10.076546007 +0000 UTC m=+137.800250906" watchObservedRunningTime="2025-12-09 14:58:10.07891901 +0000 UTC m=+137.802623899" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.134422 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" event={"ID":"a967d951-470b-486f-8037-73dcbdb3e171","Type":"ContainerStarted","Data":"79588051f96c6c4667c297d5aae7c4ab65ede19cdfdc8d07bc1ec9ce5b240689"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.138786 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:10 crc kubenswrapper[5107]: E1209 14:58:10.140290 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.640268628 +0000 UTC m=+138.363973517 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.185645 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" event={"ID":"a00d0805-a28c-4412-a7c8-bc23c90e3bff","Type":"ContainerStarted","Data":"d9a9a98535565fbae29e1836c47e359197c5b5b35417fd00630432cc2e82e6c4"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.231950 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" event={"ID":"ee76cc25-84c6-48a3-874b-4d310ae15a1e","Type":"ContainerStarted","Data":"fe567d66e2f8d4ac98692a8774a7627232b1e2266fb213ce5db6b52792c912ed"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.243310 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:10 crc kubenswrapper[5107]: E1209 14:58:10.243716 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.743686192 +0000 UTC m=+138.467391081 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.253117 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" event={"ID":"ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150","Type":"ContainerStarted","Data":"1663056d1b7587494574a1659f64143086c864f0a757eb2868ef155d66761c58"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.278399 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" event={"ID":"894bdfb3-b1c7-419e-8f5c-7788f22807af","Type":"ContainerStarted","Data":"109ef8f89b37f2f40f9e7df1cd85c0fd2c3fc18d445dd9f389b1dabb8cc2b29f"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.318850 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" event={"ID":"de306a6c-37e9-4adb-bd62-44825e0df8c1","Type":"ContainerStarted","Data":"2f6c07a540b2183df568054549dc2be8ce21c6cd381bd570bfd67d73c1415fb5"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.320540 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.343694 5107 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-94qbq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.343774 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" podUID="de306a6c-37e9-4adb-bd62-44825e0df8c1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.344456 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" event={"ID":"de384ce1-a016-4108-bb34-bf9475a09c66","Type":"ContainerStarted","Data":"35fed9e87449527267df7d55fd5619c3053673aa9909829ce4adf889b55c0e13"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.346004 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:10 crc kubenswrapper[5107]: E1209 14:58:10.351081 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:58:10.851061403 +0000 UTC m=+138.574766292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.367670 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" event={"ID":"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524","Type":"ContainerStarted","Data":"695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.372887 5107 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-9lrlj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.372981 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" podUID="cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.393826 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" event={"ID":"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427","Type":"ContainerStarted","Data":"4eb1ffbc142813894aab27f58280e53424be12c4e435d48526720113f337fc93"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.396589 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" event={"ID":"a67c3bfc-4a29-4bc3-b7ae-527064c5aeb9","Type":"ContainerStarted","Data":"c68dfd95d3b97aab35eaf09d623e1584fbbadb2e3f337843be78f1dd8dd6ac7c"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.399235 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6scqm" event={"ID":"561ee952-c55f-43a8-bf1a-9aa3d3f7aafa","Type":"ContainerStarted","Data":"5eebd1d1ee9c4e8197960118109e892a09f5f0278fc3987229fbdfc049a86de7"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.404000 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" podStartSLOduration=117.403978893 podStartE2EDuration="1m57.403978893s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:10.308077032 +0000 UTC m=+138.031781921" watchObservedRunningTime="2025-12-09 14:58:10.403978893 +0000 UTC m=+138.127683782" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.448557 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:10 crc kubenswrapper[5107]: E1209 14:58:10.449394 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:10.949368619 +0000 UTC m=+138.673073508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.451842 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-m6g6q" event={"ID":"d59be405-9fc2-438f-aa97-c461c35a2f61","Type":"ContainerStarted","Data":"390daf384b5bc279425b62d0bce7dfd89d2a97968ff2ebfe72e7b5079438380c"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.459658 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2kdw2" event={"ID":"f5135d4a-e7f3-4ba7-9758-ec80e3beb22e","Type":"ContainerStarted","Data":"ca9266627f0878831596ffb24d3d4cb7ff5d942e429eeb239e034001a2a7a314"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.467674 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" event={"ID":"f50117dc-dbba-4bb9-9335-fc47f0b9ad48","Type":"ContainerStarted","Data":"ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.470260 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.472618 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" event={"ID":"804c8bba-5b68-4a3e-8060-e209d86f3d38","Type":"ContainerStarted","Data":"00bfe89bdeff24111be3cc0bc47537628096f76159278a675500ab9abcd13df5"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.509851 5107 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fnsxn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.510237 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" podUID="f50117dc-dbba-4bb9-9335-fc47f0b9ad48" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.518515 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rr44v" podStartSLOduration=117.518493196 podStartE2EDuration="1m57.518493196s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:10.516954964 +0000 UTC m=+138.240659853" watchObservedRunningTime="2025-12-09 14:58:10.518493196 +0000 UTC m=+138.242198085" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.519641 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-zjd8q" podStartSLOduration=117.519635427 podStartE2EDuration="1m57.519635427s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:10.407046326 +0000 UTC m=+138.130751225" watchObservedRunningTime="2025-12-09 14:58:10.519635427 +0000 UTC m=+138.243340316" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.544708 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-v799x" event={"ID":"eaeb35b9-8034-4a9d-9cd5-ca6d11970674","Type":"ContainerStarted","Data":"8ce8cf31a86826a94ba5ca286e602f70986dcb1b89bca2d10c63de26122c393d"} Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.547641 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.550423 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:10 crc kubenswrapper[5107]: E1209 14:58:10.556547 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.056508624 +0000 UTC m=+138.780213503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.558486 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.560207 5107 patch_prober.go:28] interesting pod/console-operator-67c89758df-dmt84 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.560298 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-dmt84" podUID="7bad4c91-a636-48b6-bd5a-dc4cae3d40ca" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.562107 5107 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-bftlm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.562165 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" podUID="e8fd93d9-b829-40ad-8bf1-5e7d231b22fd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.601524 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" podStartSLOduration=117.601501929 podStartE2EDuration="1m57.601501929s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:10.562496675 +0000 UTC m=+138.286201574" watchObservedRunningTime="2025-12-09 14:58:10.601501929 +0000 UTC m=+138.325206818" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.602451 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" podStartSLOduration=118.602443414 podStartE2EDuration="1m58.602443414s" podCreationTimestamp="2025-12-09 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:10.600570494 +0000 UTC m=+138.324275383" watchObservedRunningTime="2025-12-09 14:58:10.602443414 +0000 UTC m=+138.326148303" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.639404 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6scqm" 
podStartSLOduration=8.639383723 podStartE2EDuration="8.639383723s" podCreationTimestamp="2025-12-09 14:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:10.638098168 +0000 UTC m=+138.361803057" watchObservedRunningTime="2025-12-09 14:58:10.639383723 +0000 UTC m=+138.363088612" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.651908 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:10 crc kubenswrapper[5107]: E1209 14:58:10.653979 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.153955996 +0000 UTC m=+138.877660885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.690773 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.706677 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-m6g6q" podStartSLOduration=117.70663757 podStartE2EDuration="1m57.70663757s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:10.700766701 +0000 UTC m=+138.424471600" watchObservedRunningTime="2025-12-09 14:58:10.70663757 +0000 UTC m=+138.430342459" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.725746 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-bbh5g" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.778251 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:10 crc kubenswrapper[5107]: E1209 14:58:10.778595 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.278582403 +0000 UTC m=+139.002287292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.791805 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:58:10 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Dec 09 14:58:10 crc kubenswrapper[5107]: [+]process-running ok Dec 09 14:58:10 crc kubenswrapper[5107]: healthz check failed Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.791893 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.848796 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-9lxhm" podStartSLOduration=117.848766259 podStartE2EDuration="1m57.848766259s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:10.784049461 +0000 UTC m=+138.507754350" watchObservedRunningTime="2025-12-09 14:58:10.848766259 +0000 UTC m=+138.572471148" Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.881515 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:10 crc kubenswrapper[5107]: E1209 14:58:10.881990 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.381960006 +0000 UTC m=+139.105664895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:10 crc kubenswrapper[5107]: I1209 14:58:10.984490 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:10 crc kubenswrapper[5107]: E1209 14:58:10.985162 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.485144234 +0000 UTC m=+139.208849113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.086261 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.086533 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.586493801 +0000 UTC m=+139.310198690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.087052 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.087558 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.58755084 +0000 UTC m=+139.311255729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.187931 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.188183 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.688141498 +0000 UTC m=+139.411846387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.188741 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.189087 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.689063762 +0000 UTC m=+139.412768651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.285688 5107 ???:1] "http: TLS handshake error from 192.168.126.11:37244: no serving certificate available for the kubelet" Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.290244 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.290457 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.790415241 +0000 UTC m=+139.514120130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.290933 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.291303 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.791287584 +0000 UTC m=+139.514992473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.392370 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.392541 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.892513359 +0000 UTC m=+139.616218248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.392870 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.393161 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.893153126 +0000 UTC m=+139.616858005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.494494 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.494708 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.9946786 +0000 UTC m=+139.718383489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.495012 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.495467 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:11.9954505 +0000 UTC m=+139.719155399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.571386 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" event={"ID":"804c8bba-5b68-4a3e-8060-e209d86f3d38","Type":"ContainerStarted","Data":"b68dc3b4591f173fe5606b1cd18a968f7695b8ed746056c8fccf1e6fd8b2cf5c"} Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.589587 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-v799x" event={"ID":"eaeb35b9-8034-4a9d-9cd5-ca6d11970674","Type":"ContainerStarted","Data":"50dec84cce0095008cff62a01376e87547d1eca8c68633b45df11d04ee76ba04"} Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.589707 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-v799x" Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.596557 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.596867 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.096817449 +0000 UTC m=+139.820522338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.597035 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.597607 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.097598949 +0000 UTC m=+139.821303838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.600070 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" event={"ID":"a967d951-470b-486f-8037-73dcbdb3e171","Type":"ContainerStarted","Data":"0df7b8a83bd637bacee6d7a344e28a6191039e4258ec07a728d2612f71212bb6"} Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.603218 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" event={"ID":"ecdcb3b6-9cf9-4930-b1f7-8cfcc94fa150","Type":"ContainerStarted","Data":"82db5d574244bdf2839abbf0c669fca402c324b02c1fe61937dfda69dfbaa998"} Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.603774 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.606316 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" event={"ID":"894bdfb3-b1c7-419e-8f5c-7788f22807af","Type":"ContainerStarted","Data":"3a8672c11dedcb68ca9bb8b3fbe3abe5d39c17180f7fccf3397328a849b24b92"} Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.609546 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" event={"ID":"c5c53b7a-ff7b-4dcc-8bf2-fe1a79939427","Type":"ContainerStarted","Data":"ed8ceece94a78101bc2f342144eda103a485d79105ffacf68b4f9d50318a3c0a"} Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.613191 5107 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fnsxn container/marketplace-operator namespace/openshift-marketplace: Readiness probe 
status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.613292 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" podUID="f50117dc-dbba-4bb9-9335-fc47f0b9ad48" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.613202 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-8hbcw" podStartSLOduration=118.613184 podStartE2EDuration="1m58.613184s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:11.611535706 +0000 UTC m=+139.335240595" watchObservedRunningTime="2025-12-09 14:58:11.613184 +0000 UTC m=+139.336888889" Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.698275 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.699779 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.199763 +0000 UTC m=+139.923467889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.708943 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-qgjdb" podStartSLOduration=118.708921887 podStartE2EDuration="1m58.708921887s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:11.707621542 +0000 UTC m=+139.431326431" watchObservedRunningTime="2025-12-09 14:58:11.708921887 +0000 UTC m=+139.432626776" Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.709775 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-vnd9g" podStartSLOduration=118.70976785 podStartE2EDuration="1m58.70976785s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:11.669722738 +0000 UTC m=+139.393427647" watchObservedRunningTime="2025-12-09 14:58:11.70976785 +0000 UTC m=+139.433472739" Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.752808 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-v799x" podStartSLOduration=9.752788482 podStartE2EDuration="9.752788482s" podCreationTimestamp="2025-12-09 14:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:11.748032824 +0000 UTC m=+139.471737713" watchObservedRunningTime="2025-12-09 14:58:11.752788482 +0000 UTC m=+139.476493371" Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.797377 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:58:11 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Dec 09 14:58:11 crc kubenswrapper[5107]: [+]process-running ok Dec 09 14:58:11 crc kubenswrapper[5107]: healthz check failed Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.797449 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.800709 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 
14:58:11.803414 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.303398159 +0000 UTC m=+140.027103058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.873937 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-vx5zg" podStartSLOduration=118.873901674 podStartE2EDuration="1m58.873901674s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:11.804320575 +0000 UTC m=+139.528025474" watchObservedRunningTime="2025-12-09 14:58:11.873901674 +0000 UTC m=+139.597606583" Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.901417 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vptvb"] Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.902133 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:11 crc kubenswrapper[5107]: E1209 14:58:11.902751 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.402729653 +0000 UTC m=+140.126434552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:11 crc kubenswrapper[5107]: I1209 14:58:11.911406 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" podStartSLOduration=118.911383227 podStartE2EDuration="1m58.911383227s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:11.880963096 +0000 UTC m=+139.604667995" watchObservedRunningTime="2025-12-09 14:58:11.911383227 +0000 UTC m=+139.635088136" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.004515 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.004989 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.504970115 +0000 UTC m=+140.228675004 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.015086 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.105974 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.106357 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.606310913 +0000 UTC m=+140.330015802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.207883 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.208306 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.708288518 +0000 UTC m=+140.431993407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.309297 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.309493 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.809465402 +0000 UTC m=+140.533170291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.309785 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.310076 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.810068048 +0000 UTC m=+140.533772937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.393349 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z7hcq"] Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.410447 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.411103 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.411680 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:12.911664343 +0000 UTC m=+140.635369232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.430783 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-dmt84" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.466089 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.477015 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7hcq"] Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.514474 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-utilities\") pod \"certified-operators-z7hcq\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.514595 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-catalog-content\") pod \"certified-operators-z7hcq\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.514635 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.514694 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5xgm\" (UniqueName: \"kubernetes.io/projected/21f1c435-27a8-4463-97da-af76d49f0e7a-kube-api-access-t5xgm\") pod \"certified-operators-z7hcq\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.515173 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.015153368 +0000 UTC m=+140.738858257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.611423 5107 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-94qbq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": context deadline exceeded" start-of-body= Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.612016 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" podUID="de306a6c-37e9-4adb-bd62-44825e0df8c1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": context deadline exceeded" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.615992 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.616225 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-utilities\") pod \"certified-operators-z7hcq\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.616356 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-catalog-content\") pod \"certified-operators-z7hcq\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.616583 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t5xgm\" (UniqueName: \"kubernetes.io/projected/21f1c435-27a8-4463-97da-af76d49f0e7a-kube-api-access-t5xgm\") pod \"certified-operators-z7hcq\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.617069 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.117048091 +0000 UTC m=+140.840752980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.617628 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-utilities\") pod \"certified-operators-z7hcq\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.617918 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-catalog-content\") pod \"certified-operators-z7hcq\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.633365 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vmk4n"] Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.649107 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" event={"ID":"947d55c1-7cdf-48de-b10a-e783956ebbd8","Type":"ContainerStarted","Data":"715dd6c9fc00d791d92ab6a1ed349f38e2d08a8adc2b562a254052337b5e13c8"} Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.649352 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.653580 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmk4n"] Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.658134 5107 generic.go:358] "Generic (PLEG): container finished" podID="de384ce1-a016-4108-bb34-bf9475a09c66" containerID="35fed9e87449527267df7d55fd5619c3053673aa9909829ce4adf889b55c0e13" exitCode=0 Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.658549 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" event={"ID":"de384ce1-a016-4108-bb34-bf9475a09c66","Type":"ContainerDied","Data":"35fed9e87449527267df7d55fd5619c3053673aa9909829ce4adf889b55c0e13"} Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.661463 5107 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-fnsxn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.661536 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" podUID="f50117dc-dbba-4bb9-9335-fc47f0b9ad48" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.690913 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.712036 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5xgm\" (UniqueName: \"kubernetes.io/projected/21f1c435-27a8-4463-97da-af76d49f0e7a-kube-api-access-t5xgm\") pod \"certified-operators-z7hcq\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.718396 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.718831 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.218818641 +0000 UTC m=+140.942523520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.739720 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.751535 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lcv6m"] Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.780122 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.794567 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:58:12 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Dec 09 14:58:12 crc kubenswrapper[5107]: [+]process-running ok Dec 09 14:58:12 crc kubenswrapper[5107]: healthz check failed Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.794657 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.798325 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lcv6m"] Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.820315 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.820716 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954zm\" (UniqueName: \"kubernetes.io/projected/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-kube-api-access-954zm\") pod \"community-operators-vmk4n\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.820937 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-utilities\") pod \"community-operators-vmk4n\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.821015 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-catalog-content\") pod \"community-operators-vmk4n\" 
(UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.831633 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.331612337 +0000 UTC m=+141.055317216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.924121 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-utilities\") pod \"community-operators-vmk4n\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.924200 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-catalog-content\") pod \"community-operators-vmk4n\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.924243 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-utilities\") pod \"certified-operators-lcv6m\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.924316 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.924362 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8q8\" (UniqueName: \"kubernetes.io/projected/6a7f2680-23c6-4334-a9dd-c4328ea41821-kube-api-access-kl8q8\") pod \"certified-operators-lcv6m\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.924403 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-catalog-content\") pod \"certified-operators-lcv6m\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.924439 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-954zm\" (UniqueName: \"kubernetes.io/projected/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-kube-api-access-954zm\") pod \"community-operators-vmk4n\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.925330 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-utilities\") pod \"community-operators-vmk4n\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.925597 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-catalog-content\") pod \"community-operators-vmk4n\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:12 crc kubenswrapper[5107]: E1209 14:58:12.925881 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.425854424 +0000 UTC m=+141.149559313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.988570 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-954zm\" (UniqueName: \"kubernetes.io/projected/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-kube-api-access-954zm\") pod \"community-operators-vmk4n\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:12 crc kubenswrapper[5107]: I1209 14:58:12.989935 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.012977 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2zc87"] Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.029943 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.030204 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kl8q8\" (UniqueName: \"kubernetes.io/projected/6a7f2680-23c6-4334-a9dd-c4328ea41821-kube-api-access-kl8q8\") pod \"certified-operators-lcv6m\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.030249 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-catalog-content\") pod \"certified-operators-lcv6m\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.030387 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-utilities\") pod \"certified-operators-lcv6m\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.030830 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-utilities\") pod \"certified-operators-lcv6m\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.030927 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.530906342 +0000 UTC m=+141.254611231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.031529 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-catalog-content\") pod \"certified-operators-lcv6m\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.042112 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-bg27m container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.042305 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-bg27m" podUID="b47c5069-df03-4bb4-9b81-2213e9d95183" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.091361 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2zc87"] Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.091539 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.132041 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.132418 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.632404374 +0000 UTC m=+141.356109273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.154046 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl8q8\" (UniqueName: \"kubernetes.io/projected/6a7f2680-23c6-4334-a9dd-c4328ea41821-kube-api-access-kl8q8\") pod \"certified-operators-lcv6m\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.179598 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.233923 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.239074 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.739027425 +0000 UTC m=+141.462732314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.245739 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-catalog-content\") pod \"community-operators-2zc87\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.245959 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.246188 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8jm\" (UniqueName: \"kubernetes.io/projected/7c53c28b-bc39-454a-ad61-1de7109f45ee-kube-api-access-kw8jm\") pod \"community-operators-2zc87\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.246557 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-utilities\") pod \"community-operators-2zc87\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.247137 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.747114603 +0000 UTC m=+141.470819492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.347844 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.348324 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8jm\" (UniqueName: \"kubernetes.io/projected/7c53c28b-bc39-454a-ad61-1de7109f45ee-kube-api-access-kw8jm\") pod \"community-operators-2zc87\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.348382 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-utilities\") pod \"community-operators-2zc87\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.348455 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-catalog-content\") pod \"community-operators-2zc87\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.349210 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.849186401 +0000 UTC m=+141.572891290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.349380 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-catalog-content\") pod \"community-operators-2zc87\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.349635 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-utilities\") pod \"community-operators-2zc87\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.400273 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8jm\" (UniqueName: \"kubernetes.io/projected/7c53c28b-bc39-454a-ad61-1de7109f45ee-kube-api-access-kw8jm\") pod \"community-operators-2zc87\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.451435 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.451920 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:13.951905346 +0000 UTC m=+141.675610235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.456829 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.484682 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.484842 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.485904 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.486480 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.497927 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.497971 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.498140 5107 patch_prober.go:28] interesting pod/console-64d44f6ddf-cttpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.498309 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-cttpw" podUID="d293f4be-8891-4515-b52d-35a61cddfc12" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.499969 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.500239 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.559762 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.560099 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:14.060082139 +0000 UTC m=+141.783787028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.568536 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.576872 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.735801 5107 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-94qbq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": context deadline exceeded" start-of-body= Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.735876 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" podUID="de306a6c-37e9-4adb-bd62-44825e0df8c1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": context deadline exceeded" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.737196 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"8d0b0c00-6091-44b7-a8f0-f0cc529e897a\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.737252 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"8d0b0c00-6091-44b7-a8f0-f0cc529e897a\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.737506 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.738984 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:14.238970282 +0000 UTC m=+141.962675171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.798273 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:58:13 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Dec 09 14:58:13 crc kubenswrapper[5107]: [+]process-running ok Dec 09 14:58:13 crc kubenswrapper[5107]: healthz check failed Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.798800 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.822764 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" podUID="0b29ecdf-6004-475e-8bcb-5fffa678a02b" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" gracePeriod=30 Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.837546 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-xdkgf" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.838806 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.839736 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:14.339683162 +0000 UTC m=+142.063388201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.841118 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.842253 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:14.342231771 +0000 UTC m=+142.065936660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.847660 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"8d0b0c00-6091-44b7-a8f0-f0cc529e897a\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.847739 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"8d0b0c00-6091-44b7-a8f0-f0cc529e897a\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.849637 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"8d0b0c00-6091-44b7-a8f0-f0cc529e897a\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.897518 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"8d0b0c00-6091-44b7-a8f0-f0cc529e897a\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:58:13 crc kubenswrapper[5107]: I1209 14:58:13.950999 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") 
pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:13 crc kubenswrapper[5107]: E1209 14:58:13.951274 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:14.451258377 +0000 UTC m=+142.174963266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.019419 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7hcq"] Dec 09 14:58:14 crc kubenswrapper[5107]: W1209 14:58:14.046140 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21f1c435_27a8_4463_97da_af76d49f0e7a.slice/crio-b0a0fbacf1c4b72b697f0f49471229eaa298dafdc8cd53808193c561b5884eff WatchSource:0}: Error finding container b0a0fbacf1c4b72b697f0f49471229eaa298dafdc8cd53808193c561b5884eff: Status 404 returned error can't find the container with id b0a0fbacf1c4b72b697f0f49471229eaa298dafdc8cd53808193c561b5884eff Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.067128 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:14 crc kubenswrapper[5107]: E1209 14:58:14.067548 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:14.567536208 +0000 UTC m=+142.291241097 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.165595 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.170208 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:14 crc kubenswrapper[5107]: E1209 14:58:14.170590 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:14.670574092 +0000 UTC m=+142.394278981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.273744 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:14 crc kubenswrapper[5107]: E1209 14:58:14.274201 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:14.774168931 +0000 UTC m=+142.497873810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.462923 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:14 crc kubenswrapper[5107]: E1209 14:58:14.463923 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:14.963906156 +0000 UTC m=+142.687611045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.538700 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rnpmv"] Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.566614 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:14 crc kubenswrapper[5107]: E1209 14:58:14.567043 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:15.067026852 +0000 UTC m=+142.790731741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.568421 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.576738 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnpmv"] Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.601587 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.660493 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmk4n"] Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.668265 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.668568 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzrlf\" (UniqueName: \"kubernetes.io/projected/80fe473c-479a-4083-88ed-ff9ec66558b9-kube-api-access-nzrlf\") pod \"redhat-marketplace-rnpmv\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.668619 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-utilities\") pod \"redhat-marketplace-rnpmv\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.668672 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-catalog-content\") pod \"redhat-marketplace-rnpmv\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: E1209 14:58:14.668850 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:15.168828333 +0000 UTC m=+142.892533232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.757495 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gwhqf"] Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.775721 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.775849 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nzrlf\" (UniqueName: \"kubernetes.io/projected/80fe473c-479a-4083-88ed-ff9ec66558b9-kube-api-access-nzrlf\") pod \"redhat-marketplace-rnpmv\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.775897 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-utilities\") pod \"redhat-marketplace-rnpmv\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.775939 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-catalog-content\") pod \"redhat-marketplace-rnpmv\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.776537 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-catalog-content\") pod \"redhat-marketplace-rnpmv\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: E1209 14:58:14.776944 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:15.276918433 +0000 UTC m=+143.000623322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.781019 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-utilities\") pod \"redhat-marketplace-rnpmv\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.792637 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.795943 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lcv6m"] Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.802641 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:58:14 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Dec 09 14:58:14 crc kubenswrapper[5107]: [+]process-running ok Dec 09 14:58:14 crc kubenswrapper[5107]: healthz check failed Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.802720 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.845279 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzrlf\" (UniqueName: \"kubernetes.io/projected/80fe473c-479a-4083-88ed-ff9ec66558b9-kube-api-access-nzrlf\") pod \"redhat-marketplace-rnpmv\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.879369 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.879800 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-utilities\") pod \"redhat-marketplace-gwhqf\" (UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.879876 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47x6j\" (UniqueName: \"kubernetes.io/projected/ac4fe46c-6340-4469-947f-e6e295650a97-kube-api-access-47x6j\") pod \"redhat-marketplace-gwhqf\" (UID: 
\"ac4fe46c-6340-4469-947f-e6e295650a97\") " pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.879937 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-catalog-content\") pod \"redhat-marketplace-gwhqf\" (UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:14 crc kubenswrapper[5107]: E1209 14:58:14.880062 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:15.380043829 +0000 UTC m=+143.103748718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.900260 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwhqf"] Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.906937 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.945404 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcv6m" event={"ID":"6a7f2680-23c6-4334-a9dd-c4328ea41821","Type":"ContainerStarted","Data":"8296b2dd02c82c11378b60288705136ad16de937ef3a8a379b2500ebdd072468"} Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.947490 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmk4n" event={"ID":"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e","Type":"ContainerStarted","Data":"b31685e92d5bb1563a869a3a1eca0bce59aa9c493c34400f4eb15b5dce6d8f14"} Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.949646 5107 generic.go:358] "Generic (PLEG): container finished" podID="21f1c435-27a8-4463-97da-af76d49f0e7a" containerID="9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a" exitCode=0 Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.949881 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7hcq" event={"ID":"21f1c435-27a8-4463-97da-af76d49f0e7a","Type":"ContainerDied","Data":"9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a"} Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.950036 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7hcq" event={"ID":"21f1c435-27a8-4463-97da-af76d49f0e7a","Type":"ContainerStarted","Data":"b0a0fbacf1c4b72b697f0f49471229eaa298dafdc8cd53808193c561b5884eff"} Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.985077 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-utilities\") pod \"redhat-marketplace-gwhqf\" 
(UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.985427 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-47x6j\" (UniqueName: \"kubernetes.io/projected/ac4fe46c-6340-4469-947f-e6e295650a97-kube-api-access-47x6j\") pod \"redhat-marketplace-gwhqf\" (UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.985472 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.985496 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-catalog-content\") pod \"redhat-marketplace-gwhqf\" (UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.985933 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-catalog-content\") pod \"redhat-marketplace-gwhqf\" (UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:14 crc kubenswrapper[5107]: I1209 14:58:14.986140 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-utilities\") pod \"redhat-marketplace-gwhqf\" (UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:14 crc kubenswrapper[5107]: E1209 14:58:14.986647 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:15.486634839 +0000 UTC m=+143.210339728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.045314 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-47x6j\" (UniqueName: \"kubernetes.io/projected/ac4fe46c-6340-4469-947f-e6e295650a97-kube-api-access-47x6j\") pod \"redhat-marketplace-gwhqf\" (UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.100929 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.102706 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:15.602683264 +0000 UTC m=+143.326388153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.190975 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2zc87"] Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.203696 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.204150 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:15.704133414 +0000 UTC m=+143.427838303 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.214737 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.286868 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.304579 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.318012 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.318503 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:15.818474344 +0000 UTC m=+143.542179233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.433893 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de384ce1-a016-4108-bb34-bf9475a09c66-config-volume\") pod \"de384ce1-a016-4108-bb34-bf9475a09c66\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.434038 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj9nj\" (UniqueName: \"kubernetes.io/projected/de384ce1-a016-4108-bb34-bf9475a09c66-kube-api-access-qj9nj\") pod \"de384ce1-a016-4108-bb34-bf9475a09c66\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.434435 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de384ce1-a016-4108-bb34-bf9475a09c66-secret-volume\") pod \"de384ce1-a016-4108-bb34-bf9475a09c66\" (UID: \"de384ce1-a016-4108-bb34-bf9475a09c66\") " Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.434678 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.435115 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:15.935097324 +0000 UTC m=+143.658802213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.436069 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.437091 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de384ce1-a016-4108-bb34-bf9475a09c66" containerName="collect-profiles" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.437104 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="de384ce1-a016-4108-bb34-bf9475a09c66" containerName="collect-profiles" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.437236 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="de384ce1-a016-4108-bb34-bf9475a09c66" containerName="collect-profiles" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.441542 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de384ce1-a016-4108-bb34-bf9475a09c66-config-volume" (OuterVolumeSpecName: "config-volume") pod "de384ce1-a016-4108-bb34-bf9475a09c66" (UID: "de384ce1-a016-4108-bb34-bf9475a09c66"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.493732 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de384ce1-a016-4108-bb34-bf9475a09c66-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "de384ce1-a016-4108-bb34-bf9475a09c66" (UID: "de384ce1-a016-4108-bb34-bf9475a09c66"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.494398 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de384ce1-a016-4108-bb34-bf9475a09c66-kube-api-access-qj9nj" (OuterVolumeSpecName: "kube-api-access-qj9nj") pod "de384ce1-a016-4108-bb34-bf9475a09c66" (UID: "de384ce1-a016-4108-bb34-bf9475a09c66"). InnerVolumeSpecName "kube-api-access-qj9nj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.543251 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.543804 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de384ce1-a016-4108-bb34-bf9475a09c66-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.543826 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qj9nj\" (UniqueName: \"kubernetes.io/projected/de384ce1-a016-4108-bb34-bf9475a09c66-kube-api-access-qj9nj\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.543840 5107 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de384ce1-a016-4108-bb34-bf9475a09c66-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.543932 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.043906304 +0000 UTC m=+143.767611193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.645707 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.646268 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.146250809 +0000 UTC m=+143.869955698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.746687 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.746933 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.246892768 +0000 UTC m=+143.970597657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.747227 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.747870 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.247862644 +0000 UTC m=+143.971567533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.789151 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:58:15 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Dec 09 14:58:15 crc kubenswrapper[5107]: [+]process-running ok Dec 09 14:58:15 crc kubenswrapper[5107]: healthz check failed Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.789279 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.848876 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.849099 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.349056598 +0000 UTC m=+144.072761487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.849366 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.849740 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.349722786 +0000 UTC m=+144.073427675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.951354 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.951982 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.451911277 +0000 UTC m=+144.175616176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:15 crc kubenswrapper[5107]: I1209 14:58:15.952971 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:15 crc kubenswrapper[5107]: E1209 14:58:15.953479 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.453464388 +0000 UTC m=+144.177169277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.054690 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.054953 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.554907969 +0000 UTC m=+144.278612858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.055421 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.056068 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.55605593 +0000 UTC m=+144.279760819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.156916 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.157386 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.657327386 +0000 UTC m=+144.381032275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.259440 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.259859 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.759834835 +0000 UTC m=+144.483539724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.321751 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-bg27m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.321855 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bg27m" podUID="b47c5069-df03-4bb4-9b81-2213e9d95183" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.361328 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.361609 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.861569334 +0000 UTC m=+144.585274223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.361914 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.362296 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.862280993 +0000 UTC m=+144.585985882 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.463802 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.464588 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:16.964558636 +0000 UTC m=+144.688263535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.465347 5107 ???:1] "http: TLS handshake error from 192.168.126.11:44174: no serving certificate available for the kubelet" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.565964 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.566609 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.066591283 +0000 UTC m=+144.790296172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.668052 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.668434 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.168409094 +0000 UTC m=+144.892113983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.769754 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.770239 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.270221514 +0000 UTC m=+144.993926403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.799049 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:58:16 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Dec 09 14:58:16 crc kubenswrapper[5107]: [+]process-running ok Dec 09 14:58:16 crc kubenswrapper[5107]: healthz check failed Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.799162 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.833263 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.833309 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jx4fv"] Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.833537 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.835300 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.843080 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.843246 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.871825 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.872034 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"f4d03c67-d11d-4a22-aa4e-10cc47dddbef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.872162 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"f4d03c67-d11d-4a22-aa4e-10cc47dddbef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.872361 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.372308392 +0000 UTC m=+145.096013281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.877516 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.879183 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.887702 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.978661 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wmxq\" (UniqueName: \"kubernetes.io/projected/b45380af-d55c-4f77-9385-8218e990c675-kube-api-access-2wmxq\") pod \"redhat-operators-jx4fv\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.978708 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-utilities\") pod \"redhat-operators-jx4fv\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.978769 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.978817 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"f4d03c67-d11d-4a22-aa4e-10cc47dddbef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.978864 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-catalog-content\") pod \"redhat-operators-jx4fv\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.978946 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"f4d03c67-d11d-4a22-aa4e-10cc47dddbef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:58:16 crc kubenswrapper[5107]: I1209 14:58:16.979633 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"f4d03c67-d11d-4a22-aa4e-10cc47dddbef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:58:16 crc kubenswrapper[5107]: E1209 14:58:16.979801 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.479774605 +0000 UTC m=+145.203479494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.030144 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"f4d03c67-d11d-4a22-aa4e-10cc47dddbef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.051890 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnpmv" event={"ID":"80fe473c-479a-4083-88ed-ff9ec66558b9","Type":"ContainerStarted","Data":"6ea8c7806dfeb14762af39a00d1846b08ff086a2af9e0135cbf1f0250bc003e4"} Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.051945 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"8d0b0c00-6091-44b7-a8f0-f0cc529e897a","Type":"ContainerStarted","Data":"8003b926c5c4b91d405c15b90dcd8bc7c07f613e7cbc43af5ff2d9a8198c0614"} Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.051961 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zc87" event={"ID":"7c53c28b-bc39-454a-ad61-1de7109f45ee","Type":"ContainerStarted","Data":"e6b0d7e0929c72c9a6cc1a1c25bc72960d514fa9dea6b0804553e886d4156dea"} Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.051972 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421525-2b9xz" event={"ID":"de384ce1-a016-4108-bb34-bf9475a09c66","Type":"ContainerDied","Data":"8f0ccd9da6e75e534f8742765e0c9370acb53a3fa48c5497d533ad486a13b5e3"} Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.051988 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f0ccd9da6e75e534f8742765e0c9370acb53a3fa48c5497d533ad486a13b5e3" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.052029 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jx4fv"] Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.052048 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnpmv"] Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.052060 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cg8hp"] Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.080606 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.081736 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" 
failed. No retries permitted until 2025-12-09 14:58:17.581712009 +0000 UTC m=+145.305416898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.081819 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-catalog-content\") pod \"redhat-operators-jx4fv\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.081966 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2wmxq\" (UniqueName: \"kubernetes.io/projected/b45380af-d55c-4f77-9385-8218e990c675-kube-api-access-2wmxq\") pod \"redhat-operators-jx4fv\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.082002 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-utilities\") pod \"redhat-operators-jx4fv\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.082114 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.082619 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.582598524 +0000 UTC m=+145.306303433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.083841 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-utilities\") pod \"redhat-operators-jx4fv\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.084392 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-catalog-content\") pod \"redhat-operators-jx4fv\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.134440 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wmxq\" (UniqueName: \"kubernetes.io/projected/b45380af-d55c-4f77-9385-8218e990c675-kube-api-access-2wmxq\") pod \"redhat-operators-jx4fv\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.183296 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.183618 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.683588912 +0000 UTC m=+145.407293811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.184067 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.184472 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:58:17.684453715 +0000 UTC m=+145.408158604 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.203475 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.217634 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.285995 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.290876 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.790843779 +0000 UTC m=+145.514548668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.391862 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.392261 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:17.892248879 +0000 UTC m=+145.615953768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.545390 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.545919 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.045894939 +0000 UTC m=+145.769599818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.655694 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.656432 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.156419035 +0000 UTC m=+145.880123924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.759265 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.759458 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.759529 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.2594803 +0000 UTC m=+145.983185329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.759797 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.760149 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.760212 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.760279 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.765069 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.766955 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.266934451 +0000 UTC m=+145.990639550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.769837 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.772679 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.774389 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.826756 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7hwmg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:58:17 crc kubenswrapper[5107]: [+]has-synced ok Dec 09 14:58:17 crc kubenswrapper[5107]: [+]process-running ok Dec 09 14:58:17 crc kubenswrapper[5107]: healthz check failed Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.826831 5107 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" podUID="43ec8aae-6bc0-438d-84c5-63ef04ca4db9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.842139 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.851992 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.856713 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.861597 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.861834 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.361801064 +0000 UTC m=+146.085505953 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.862072 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.862159 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.862651 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.362643587 +0000 UTC m=+146.086348476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.878715 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f154303d-e14b-4854-8f94-194d0f338f98-metrics-certs\") pod \"network-metrics-daemon-6xk48\" (UID: \"f154303d-e14b-4854-8f94-194d0f338f98\") " pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.904904 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cg8hp"] Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.905360 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.906420 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcv6m" event={"ID":"6a7f2680-23c6-4334-a9dd-c4328ea41821","Type":"ContainerStarted","Data":"c4b050dbd790417068933eaa7294f41b980d2d6c3d41bbabb069e6ec2947121e"} Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.906459 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwhqf"] Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.906520 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.906536 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmk4n" event={"ID":"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e","Type":"ContainerStarted","Data":"d9cae4e4fd9600c5b3de074e6c9ed71d51a0fe2fd77ef014627f4f500e515148"} Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.906550 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwhqf" event={"ID":"ac4fe46c-6340-4469-947f-e6e295650a97","Type":"ContainerStarted","Data":"93e2e8ccc89a9a375537d0f858f7341cae0d5b14002349644921126c47e13d4c"} Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.965803 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.465754542 +0000 UTC m=+146.189459441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.966875 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.967200 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.967278 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54lrz\" (UniqueName: \"kubernetes.io/projected/9ae57eec-5514-4d0f-8d41-d78aceca7255-kube-api-access-54lrz\") pod \"redhat-operators-cg8hp\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.967310 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-utilities\") pod \"redhat-operators-cg8hp\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:17 crc kubenswrapper[5107]: I1209 14:58:17.967415 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-catalog-content\") pod \"redhat-operators-cg8hp\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:17 crc kubenswrapper[5107]: E1209 14:58:17.967746 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.467724436 +0000 UTC m=+146.191429325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.003448 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jx4fv"] Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.068552 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.068844 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-catalog-content\") pod \"redhat-operators-cg8hp\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.068925 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-54lrz\" (UniqueName: \"kubernetes.io/projected/9ae57eec-5514-4d0f-8d41-d78aceca7255-kube-api-access-54lrz\") pod \"redhat-operators-cg8hp\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.068948 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-utilities\") pod \"redhat-operators-cg8hp\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.069526 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-utilities\") pod \"redhat-operators-cg8hp\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:18 crc kubenswrapper[5107]: E1209 14:58:18.069620 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.569596738 +0000 UTC m=+146.293301627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.069836 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-catalog-content\") pod \"redhat-operators-cg8hp\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.133138 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6xk48" Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.170497 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:18 crc kubenswrapper[5107]: E1209 14:58:18.171133 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.67110893 +0000 UTC m=+146.394813819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.259957 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-54lrz\" (UniqueName: \"kubernetes.io/projected/9ae57eec-5514-4d0f-8d41-d78aceca7255-kube-api-access-54lrz\") pod \"redhat-operators-cg8hp\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.277355 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:18 crc kubenswrapper[5107]: E1209 14:58:18.277708 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.777679829 +0000 UTC m=+146.501384868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.278191 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:18 crc kubenswrapper[5107]: E1209 14:58:18.278838 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.77881104 +0000 UTC m=+146.502516049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.297272 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnpmv" event={"ID":"80fe473c-479a-4083-88ed-ff9ec66558b9","Type":"ContainerStarted","Data":"28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14"} Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.348784 5107 generic.go:358] "Generic (PLEG): container finished" podID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" containerID="d9cae4e4fd9600c5b3de074e6c9ed71d51a0fe2fd77ef014627f4f500e515148" exitCode=0 Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.349271 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmk4n" event={"ID":"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e","Type":"ContainerDied","Data":"d9cae4e4fd9600c5b3de074e6c9ed71d51a0fe2fd77ef014627f4f500e515148"} Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.422204 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:18 crc kubenswrapper[5107]: E1209 14:58:18.425914 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:18.925888104 +0000 UTC m=+146.649592993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.475695 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"8d0b0c00-6091-44b7-a8f0-f0cc529e897a","Type":"ContainerStarted","Data":"b700365c54e374d0acfc987c428536653008f3d8b66eb99544c28da5841dca7a"} Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.500411 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zc87" event={"ID":"7c53c28b-bc39-454a-ad61-1de7109f45ee","Type":"ContainerStarted","Data":"5c97f87909aa19b7edaa86df93b295303e2f01a8585a3310bde0d3316e522043"} Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.527975 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:18 crc kubenswrapper[5107]: E1209 14:58:18.530166 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.03014016 +0000 UTC m=+146.753845209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.530805 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"f4d03c67-d11d-4a22-aa4e-10cc47dddbef","Type":"ContainerStarted","Data":"e1753e49c606a184a806b205e271162512b1d8e40226ca8a1af1bb8af082a9b8"} Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.540701 5107 generic.go:358] "Generic (PLEG): container finished" podID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerID="c4b050dbd790417068933eaa7294f41b980d2d6c3d41bbabb069e6ec2947121e" exitCode=0 Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.540825 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcv6m" event={"ID":"6a7f2680-23c6-4334-a9dd-c4328ea41821","Type":"ContainerDied","Data":"c4b050dbd790417068933eaa7294f41b980d2d6c3d41bbabb069e6ec2947121e"} Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.547917 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.629568 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:18 crc kubenswrapper[5107]: E1209 14:58:18.630623 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.130573753 +0000 UTC m=+146.854278642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.732883 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:18 crc kubenswrapper[5107]: E1209 14:58:18.733493 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.233470254 +0000 UTC m=+146.957175143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.803452 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6xk48"] Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.806420 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.835542 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:18 crc kubenswrapper[5107]: E1209 14:58:18.836053 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.336023824 +0000 UTC m=+147.059728713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:18 crc kubenswrapper[5107]: W1209 14:58:18.922576 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-8edf6b7573f284bd07418033d9c76d80f4efa98f4abdaf9af26fe86fc4424a40 WatchSource:0}: Error finding container 8edf6b7573f284bd07418033d9c76d80f4efa98f4abdaf9af26fe86fc4424a40: Status 404 returned error can't find the container with id 8edf6b7573f284bd07418033d9c76d80f4efa98f4abdaf9af26fe86fc4424a40 Dec 09 14:58:18 crc kubenswrapper[5107]: I1209 14:58:18.937659 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:18 crc kubenswrapper[5107]: E1209 14:58:18.938215 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.438202955 +0000 UTC m=+147.161907834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.016059 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cg8hp"] Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.052613 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:19 crc kubenswrapper[5107]: E1209 14:58:19.052836 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.55278525 +0000 UTC m=+147.276490149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.056049 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-7hwmg" Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.157897 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:19 crc kubenswrapper[5107]: E1209 14:58:19.158289 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.65827587 +0000 UTC m=+147.381980759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.260177 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:19 crc kubenswrapper[5107]: E1209 14:58:19.260367 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.760322147 +0000 UTC m=+147.484027036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.261114 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:19 crc kubenswrapper[5107]: E1209 14:58:19.261633 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.761615002 +0000 UTC m=+147.485319891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.363067 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:19 crc kubenswrapper[5107]: E1209 14:58:19.363724 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.86369962 +0000 UTC m=+147.587404509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.465448 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:19 crc kubenswrapper[5107]: E1209 14:58:19.466126 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:19.966101456 +0000 UTC m=+147.689806515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.554026 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"8edf6b7573f284bd07418033d9c76d80f4efa98f4abdaf9af26fe86fc4424a40"} Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.558511 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"0a7d93533fcc3199e035b131dea0065ecfe76751ce70a83c7b1c10e808467984"} Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.564669 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"f18cdfeef9588789f820f72dad2659fb5b18d73fda7b5a0459a90d8cc4fd7a52"} Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.566784 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:19 crc kubenswrapper[5107]: E1209 14:58:19.742820 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.242790042 +0000 UTC m=+147.966494931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.747257 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.747822 5107 generic.go:358] "Generic (PLEG): container finished" podID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerID="28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14" exitCode=0 Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.748047 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnpmv" event={"ID":"80fe473c-479a-4083-88ed-ff9ec66558b9","Type":"ContainerDied","Data":"28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14"} Dec 09 14:58:19 crc kubenswrapper[5107]: E1209 14:58:19.748137 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.248108905 +0000 UTC m=+147.971813794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.807805 5107 generic.go:358] "Generic (PLEG): container finished" podID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerID="5c97f87909aa19b7edaa86df93b295303e2f01a8585a3310bde0d3316e522043" exitCode=0 Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.808109 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zc87" event={"ID":"7c53c28b-bc39-454a-ad61-1de7109f45ee","Type":"ContainerDied","Data":"5c97f87909aa19b7edaa86df93b295303e2f01a8585a3310bde0d3316e522043"} Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.823141 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6xk48" event={"ID":"f154303d-e14b-4854-8f94-194d0f338f98","Type":"ContainerStarted","Data":"73771770ad05488fb0a668affa708cee2d9629e3190b8a30f7f31d7e82d911ca"} Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.828476 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg8hp" event={"ID":"9ae57eec-5514-4d0f-8d41-d78aceca7255","Type":"ContainerStarted","Data":"fe5c93b6ab865bc6f036dd7d332cf77ed2f90d68bc143da64eede4cada49eed4"} Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.831676 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jx4fv" event={"ID":"b45380af-d55c-4f77-9385-8218e990c675","Type":"ContainerStarted","Data":"f74a888f137fb6b32f9a0f5726094b05a78e89b174ae9949b613f1df90592054"} Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.906684 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:19 crc kubenswrapper[5107]: E1209 14:58:19.906868 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.406824583 +0000 UTC m=+148.130529472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:19 crc kubenswrapper[5107]: I1209 14:58:19.907321 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:19 crc kubenswrapper[5107]: E1209 14:58:19.907914 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.407900772 +0000 UTC m=+148.131605661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.008933 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.009916 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.509899907 +0000 UTC m=+148.233604796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.111702 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.112253 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.612231562 +0000 UTC m=+148.335936451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.208773 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=7.20875027 podStartE2EDuration="7.20875027s" podCreationTimestamp="2025-12-09 14:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:20.200316262 +0000 UTC m=+147.924021161" watchObservedRunningTime="2025-12-09 14:58:20.20875027 +0000 UTC m=+147.932455149" Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.212818 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.213051 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.713010705 +0000 UTC m=+148.436715584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.213909 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.214638 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.714629609 +0000 UTC m=+148.438334498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.316198 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.316457 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.816426409 +0000 UTC m=+148.540131298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.316728 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.317352 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.817326783 +0000 UTC m=+148.541031672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.419109 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.419516 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.919475243 +0000 UTC m=+148.643180132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.420605 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.422417 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:20.922393622 +0000 UTC m=+148.646098511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.527321 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.527677 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.027645135 +0000 UTC m=+148.751350034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.565760 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-bftlm" Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.571072 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.581005 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.589065 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.589170 5107 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" podUID="0b29ecdf-6004-475e-8bcb-5fffa678a02b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.629807 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.630607 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.130276648 +0000 UTC m=+148.853981537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.731171 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.731470 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.231422171 +0000 UTC m=+148.955127070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.731984 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.733864 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.233841276 +0000 UTC m=+148.957546155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.833820 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.834325 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.33426982 +0000 UTC m=+149.057974699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.834811 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.835491 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.335473742 +0000 UTC m=+149.059178641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.853830 5107 generic.go:358] "Generic (PLEG): container finished" podID="ac4fe46c-6340-4469-947f-e6e295650a97" containerID="aa8e8ab02b52ec90b4f72ab4db6bf4330004c1b7025f9d930d1f71e5653cc3b9" exitCode=0 Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.854029 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwhqf" event={"ID":"ac4fe46c-6340-4469-947f-e6e295650a97","Type":"ContainerDied","Data":"aa8e8ab02b52ec90b4f72ab4db6bf4330004c1b7025f9d930d1f71e5653cc3b9"} Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.875646 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"f4d03c67-d11d-4a22-aa4e-10cc47dddbef","Type":"ContainerStarted","Data":"c5d20da5e877c178621508bf1804c884c6e0b579b2c8f3cd1570eb54c6820c33"} Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.882121 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg8hp" event={"ID":"9ae57eec-5514-4d0f-8d41-d78aceca7255","Type":"ContainerStarted","Data":"21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0"} Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.901751 5107 generic.go:358] "Generic (PLEG): container finished" podID="b45380af-d55c-4f77-9385-8218e990c675" containerID="6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f" exitCode=0 Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.901917 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jx4fv" event={"ID":"b45380af-d55c-4f77-9385-8218e990c675","Type":"ContainerDied","Data":"6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f"} Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.910658 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=5.910621033 podStartE2EDuration="5.910621033s" podCreationTimestamp="2025-12-09 14:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:20.906920372 +0000 UTC m=+148.630625261" watchObservedRunningTime="2025-12-09 14:58:20.910621033 +0000 UTC m=+148.634325922" Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.916439 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"c49666892b4b7e72f7356f2a57a84775dfb29e14c0fb6e2e4ca0d18f3313eac9"} Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.920507 5107 generic.go:358] "Generic (PLEG): container finished" podID="8d0b0c00-6091-44b7-a8f0-f0cc529e897a" containerID="b700365c54e374d0acfc987c428536653008f3d8b66eb99544c28da5841dca7a" exitCode=0 Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.923486 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"8d0b0c00-6091-44b7-a8f0-f0cc529e897a","Type":"ContainerDied","Data":"b700365c54e374d0acfc987c428536653008f3d8b66eb99544c28da5841dca7a"} Dec 09 14:58:20 crc kubenswrapper[5107]: I1209 14:58:20.937799 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:20 crc kubenswrapper[5107]: E1209 14:58:20.939184 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.439157863 +0000 UTC m=+149.162862752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.048647 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.048955 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.548940529 +0000 UTC m=+149.272645408 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.091185 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.149677 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.150349 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.650302997 +0000 UTC m=+149.374007876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.252531 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.254504 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.754485571 +0000 UTC m=+149.478190460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.354576 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.354894 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.854840923 +0000 UTC m=+149.578545812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.355192 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.355755 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.855732137 +0000 UTC m=+149.579437196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.457686 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.457967 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:21.957946899 +0000 UTC m=+149.681651788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.560660 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.561241 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:22.061217839 +0000 UTC m=+149.784922728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.662868 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.663227 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:22.163208544 +0000 UTC m=+149.886913433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.685134 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-v799x" Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.791172 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.792142 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:22.292112217 +0000 UTC m=+150.015817096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.892964 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.893231 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:22.393201467 +0000 UTC m=+150.116906366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.893506 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.894107 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:22.394082762 +0000 UTC m=+150.117787651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.953088 5107 generic.go:358] "Generic (PLEG): container finished" podID="f4d03c67-d11d-4a22-aa4e-10cc47dddbef" containerID="c5d20da5e877c178621508bf1804c884c6e0b579b2c8f3cd1570eb54c6820c33" exitCode=0 Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.953291 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"f4d03c67-d11d-4a22-aa4e-10cc47dddbef","Type":"ContainerDied","Data":"c5d20da5e877c178621508bf1804c884c6e0b579b2c8f3cd1570eb54c6820c33"} Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.967050 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6xk48" event={"ID":"f154303d-e14b-4854-8f94-194d0f338f98","Type":"ContainerStarted","Data":"7ad7817d9973e326f6b3a70c8cf4c8126fda115fe27b5f17c931311d81193a40"} Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.979174 5107 generic.go:358] "Generic (PLEG): container finished" podID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerID="21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0" exitCode=0 Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.979360 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg8hp" event={"ID":"9ae57eec-5514-4d0f-8d41-d78aceca7255","Type":"ContainerDied","Data":"21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0"} Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.987144 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"2d93a298cd1a40eced5d4070f8cabaf03fcadadaa673e7b078d820d92cad136d"} Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.988068 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.989933 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"0fb998cc7bd1375ef0a43cec1050a7f3e69ca16cb7835ea8dc30d2a997637880"} Dec 09 14:58:21 crc kubenswrapper[5107]: I1209 14:58:21.995266 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:21 crc kubenswrapper[5107]: E1209 14:58:21.995760 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-09 14:58:22.495711577 +0000 UTC m=+150.219416466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.098046 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.099605 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:22.599581633 +0000 UTC m=+150.323286522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.205358 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.206058 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:22.706006178 +0000 UTC m=+150.429711077 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.206945 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.207510 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:22.707482248 +0000 UTC m=+150.431187137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.248162 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.308486 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.308881 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:22.808864717 +0000 UTC m=+150.532569606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.410296 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kubelet-dir\") pod \"8d0b0c00-6091-44b7-a8f0-f0cc529e897a\" (UID: \"8d0b0c00-6091-44b7-a8f0-f0cc529e897a\") " Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.410507 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8d0b0c00-6091-44b7-a8f0-f0cc529e897a" (UID: "8d0b0c00-6091-44b7-a8f0-f0cc529e897a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.410968 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kube-api-access\") pod \"8d0b0c00-6091-44b7-a8f0-f0cc529e897a\" (UID: \"8d0b0c00-6091-44b7-a8f0-f0cc529e897a\") " Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.411293 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.411419 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.411803 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:22.911779618 +0000 UTC m=+150.635484677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.434578 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8d0b0c00-6091-44b7-a8f0-f0cc529e897a" (UID: "8d0b0c00-6091-44b7-a8f0-f0cc529e897a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.512380 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.512700 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.012640552 +0000 UTC m=+150.736345441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.514434 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.514853 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.014835251 +0000 UTC m=+150.738540140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.514895 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d0b0c00-6091-44b7-a8f0-f0cc529e897a-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.616107 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.616587 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-09 14:58:23.116497378 +0000 UTC m=+150.840202277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.617418 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.617979 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.117963478 +0000 UTC m=+150.841668367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.665582 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-94qbq" Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.668937 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.719263 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.719692 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.219647405 +0000 UTC m=+150.943352414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.821785 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.822168 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.322150964 +0000 UTC m=+151.045855853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.923046 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.923285 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.423241395 +0000 UTC m=+151.146946284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:22 crc kubenswrapper[5107]: I1209 14:58:22.924084 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:22 crc kubenswrapper[5107]: E1209 14:58:22.924673 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.424658353 +0000 UTC m=+151.148363242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.002835 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"8d0b0c00-6091-44b7-a8f0-f0cc529e897a","Type":"ContainerDied","Data":"8003b926c5c4b91d405c15b90dcd8bc7c07f613e7cbc43af5ff2d9a8198c0614"} Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.002928 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8003b926c5c4b91d405c15b90dcd8bc7c07f613e7cbc43af5ff2d9a8198c0614" Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.002867 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.025878 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.026345 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.526285859 +0000 UTC m=+151.249990748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.026819 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.027327 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.527303056 +0000 UTC m=+151.251007935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.039782 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-bg27m container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.039906 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-bg27m" podUID="b47c5069-df03-4bb4-9b81-2213e9d95183" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.128899 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.129093 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.629065876 +0000 UTC m=+151.352770765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.129276 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.129607 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.62960042 +0000 UTC m=+151.353305309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.242479 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.242715 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.742675155 +0000 UTC m=+151.466380044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.242887 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.243448 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.743429426 +0000 UTC m=+151.467134315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.344170 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.344444 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.844397603 +0000 UTC m=+151.568102492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.345077 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.345555 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.845547075 +0000 UTC m=+151.569251964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.447143 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.447745 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:23.947710045 +0000 UTC m=+151.671414934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.485241 5107 patch_prober.go:28] interesting pod/console-64d44f6ddf-cttpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.485380 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-cttpw" podUID="d293f4be-8891-4515-b52d-35a61cddfc12" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.549366 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.550028 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.050003488 +0000 UTC m=+151.773708377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.650769 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.651363 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.151322535 +0000 UTC m=+151.875027424 (durationBeforeRetry 500ms). 
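
The reconciler keeps requeueing both the mount and the unmount with the same short backoff; neither can succeed until the hostpath node plugin registers itself with the kubelet, which it does over a socket in the kubelet's plugin-registration directory once its pod is running (the ContainerStarted event for hostpath-provisioner/csi-hostpathplugin-g5bqc a few entries further down marks that point). A small node-local sketch of checking for that socket, assuming the default registration directory /var/lib/kubelet/plugins_registry and that it is run on the node itself (for example from a debug shell):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const regDir = "/var/lib/kubelet/plugins_registry" // assumption: default kubelet registration dir
        const driver = "kubevirt.io.hostpath-provisioner"  // driver name from the errors above

        entries, err := os.ReadDir(regDir)
        if err != nil {
            fmt.Fprintln(os.Stderr, "cannot read registration dir:", err)
            os.Exit(1)
        }
        found := false
        for _, e := range entries {
            // Node plugins typically drop a <driver-name>-reg.sock entry here when they register.
            fmt.Println(e.Name())
            if strings.Contains(e.Name(), driver) {
                found = true
            }
        }
        fmt.Println("hostpath provisioner registered:", found)
    }
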
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.755740 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.756354 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.256283421 +0000 UTC m=+151.979988310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.857425 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.857860 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.357832265 +0000 UTC m=+152.081537154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.957779 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:58:23 crc kubenswrapper[5107]: I1209 14:58:23.958862 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:23 crc kubenswrapper[5107]: E1209 14:58:23.959352 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.459317786 +0000 UTC m=+152.183022675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.019935 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" event={"ID":"947d55c1-7cdf-48de-b10a-e783956ebbd8","Type":"ContainerStarted","Data":"64dbd6b69ee3e23b63438233f38c807520abd7bedc9357e55efa05816acba325"} Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.025747 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"f4d03c67-d11d-4a22-aa4e-10cc47dddbef","Type":"ContainerDied","Data":"e1753e49c606a184a806b205e271162512b1d8e40226ca8a1af1bb8af082a9b8"} Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.025821 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.025872 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1753e49c606a184a806b205e271162512b1d8e40226ca8a1af1bb8af082a9b8" Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.060309 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kube-api-access\") pod \"f4d03c67-d11d-4a22-aa4e-10cc47dddbef\" (UID: \"f4d03c67-d11d-4a22-aa4e-10cc47dddbef\") " Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.060428 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kubelet-dir\") pod \"f4d03c67-d11d-4a22-aa4e-10cc47dddbef\" (UID: \"f4d03c67-d11d-4a22-aa4e-10cc47dddbef\") " Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.060603 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f4d03c67-d11d-4a22-aa4e-10cc47dddbef" (UID: "f4d03c67-d11d-4a22-aa4e-10cc47dddbef"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.060769 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.061096 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.061855 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.561827515 +0000 UTC m=+152.285532394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.093094 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f4d03c67-d11d-4a22-aa4e-10cc47dddbef" (UID: "f4d03c67-d11d-4a22-aa4e-10cc47dddbef"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.162460 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.162863 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.662843274 +0000 UTC m=+152.386548163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.163104 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d03c67-d11d-4a22-aa4e-10cc47dddbef-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.264986 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.265196 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.765162988 +0000 UTC m=+152.488867877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.265304 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.265664 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.765656272 +0000 UTC m=+152.489361161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.366617 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.366883 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.866842995 +0000 UTC m=+152.590547894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.367226 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.367613 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.867597986 +0000 UTC m=+152.591302905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.477672 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.479358 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:24.97916956 +0000 UTC m=+152.702874449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.580305 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.580910 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:25.080885508 +0000 UTC m=+152.804590397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.682060 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.682503 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:25.182413 +0000 UTC m=+152.906117889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.685420 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.685932 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:25.185918136 +0000 UTC m=+152.909623025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.790421 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.791020 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:25.290997014 +0000 UTC m=+153.014701903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.862027 5107 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.892810 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.893378 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:25.393349859 +0000 UTC m=+153.117054748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:24 crc kubenswrapper[5107]: I1209 14:58:24.994350 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:24 crc kubenswrapper[5107]: E1209 14:58:24.994651 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:58:25.494627965 +0000 UTC m=+153.218332854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:25 crc kubenswrapper[5107]: I1209 14:58:25.033798 5107 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-09T14:58:24.862072705Z","UUID":"b770518d-33eb-47f9-9dba-7d0ca5876809","Handler":null,"Name":"","Endpoint":""} Dec 09 14:58:25 crc kubenswrapper[5107]: I1209 14:58:25.056063 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6xk48" event={"ID":"f154303d-e14b-4854-8f94-194d0f338f98","Type":"ContainerStarted","Data":"e0108986f26de72918a409792a1960b1a87cb42045e64ea611f8299b5c81b81c"} Dec 09 14:58:25 crc kubenswrapper[5107]: I1209 14:58:25.072388 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6xk48" podStartSLOduration=132.072355466 podStartE2EDuration="2m12.072355466s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:25.070221057 +0000 UTC m=+152.793925946" watchObservedRunningTime="2025-12-09 14:58:25.072355466 +0000 UTC m=+152.796060365" Dec 09 14:58:25 crc kubenswrapper[5107]: I1209 14:58:25.097029 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:25 crc kubenswrapper[5107]: E1209 14:58:25.097674 5107 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:58:25.597647359 +0000 UTC m=+153.321352398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-9kn5t" (UID: "baa70a71-f986-4810-8d66-a6313df5d522") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:58:25 crc kubenswrapper[5107]: I1209 14:58:25.136787 5107 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 09 14:58:25 crc kubenswrapper[5107]: I1209 14:58:25.136872 5107 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 09 14:58:25 crc kubenswrapper[5107]: I1209 14:58:25.198714 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:58:25 crc kubenswrapper[5107]: I1209 14:58:25.204979 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 09 14:58:25 crc kubenswrapper[5107]: I1209 14:58:25.302288 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:26 crc kubenswrapper[5107]: I1209 14:58:26.037586 5107 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 14:58:26 crc kubenswrapper[5107]: I1209 14:58:26.037670 5107 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:26 crc kubenswrapper[5107]: I1209 14:58:26.105823 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-9kn5t\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:26 crc kubenswrapper[5107]: I1209 14:58:26.321716 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-bg27m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 09 14:58:26 crc kubenswrapper[5107]: I1209 14:58:26.321851 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-bg27m" podUID="b47c5069-df03-4bb4-9b81-2213e9d95183" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 09 14:58:26 crc kubenswrapper[5107]: I1209 14:58:26.352324 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:26 crc kubenswrapper[5107]: I1209 14:58:26.737879 5107 ???:1] "http: TLS handshake error from 192.168.126.11:60232: no serving certificate available for the kubelet" Dec 09 14:58:26 crc kubenswrapper[5107]: I1209 14:58:26.827586 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 09 14:58:30 crc kubenswrapper[5107]: E1209 14:58:30.564787 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:58:30 crc kubenswrapper[5107]: E1209 14:58:30.567370 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:58:30 crc kubenswrapper[5107]: E1209 14:58:30.569951 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:58:30 crc kubenswrapper[5107]: E1209 14:58:30.570147 5107 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" podUID="0b29ecdf-6004-475e-8bcb-5fffa678a02b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 09 14:58:31 crc kubenswrapper[5107]: I1209 14:58:31.115171 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" event={"ID":"947d55c1-7cdf-48de-b10a-e783956ebbd8","Type":"ContainerStarted","Data":"771b019d5af630f38782de5c9483abcd1f5e0a5e16cb990273a61ac8a485d789"} Dec 09 14:58:33 crc kubenswrapper[5107]: I1209 14:58:33.509518 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:33 crc kubenswrapper[5107]: I1209 14:58:33.525117 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-cttpw" Dec 09 14:58:36 crc kubenswrapper[5107]: I1209 14:58:36.336773 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-bg27m" Dec 09 14:58:40 crc kubenswrapper[5107]: E1209 14:58:40.562507 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:58:40 crc kubenswrapper[5107]: E1209 14:58:40.566387 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: 
cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:58:40 crc kubenswrapper[5107]: E1209 14:58:40.567928 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:58:40 crc kubenswrapper[5107]: E1209 14:58:40.568095 5107 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" podUID="0b29ecdf-6004-475e-8bcb-5fffa678a02b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 09 14:58:41 crc kubenswrapper[5107]: I1209 14:58:41.225095 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-9kn5t"] Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.236065 5107 generic.go:358] "Generic (PLEG): container finished" podID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerID="b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2" exitCode=0 Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.236177 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnpmv" event={"ID":"80fe473c-479a-4083-88ed-ff9ec66558b9","Type":"ContainerDied","Data":"b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2"} Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.239624 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmk4n" event={"ID":"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e","Type":"ContainerStarted","Data":"4470f97758135863ddf0c0d1dd2d807a42a078968ba74caa0b619f81bf15463e"} Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.257180 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" event={"ID":"baa70a71-f986-4810-8d66-a6313df5d522","Type":"ContainerStarted","Data":"b948b6def0ccd4afa1cda2751aa44b8af181311d2c053c06503f16f9856b0d4f"} Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.257279 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" event={"ID":"baa70a71-f986-4810-8d66-a6313df5d522","Type":"ContainerStarted","Data":"ac485abf21d0e0d8cc62de8ec8b33d7f0dd8b7f38c6299ab7de7bba85ef4098c"} Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.257538 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.264413 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zc87" event={"ID":"7c53c28b-bc39-454a-ad61-1de7109f45ee","Type":"ContainerStarted","Data":"615fb757de959794b25709af24cfd1e85f0feb0472afe3da6e8b99b9f85b0a39"} Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.266811 5107 generic.go:358] "Generic (PLEG): container finished" podID="ac4fe46c-6340-4469-947f-e6e295650a97" containerID="856f0f18b2d352256c6a0b740ad281810f7139569fe7ae48c7b232e8c613c52f" 
exitCode=0 Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.267114 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwhqf" event={"ID":"ac4fe46c-6340-4469-947f-e6e295650a97","Type":"ContainerDied","Data":"856f0f18b2d352256c6a0b740ad281810f7139569fe7ae48c7b232e8c613c52f"} Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.285942 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7hcq" event={"ID":"21f1c435-27a8-4463-97da-af76d49f0e7a","Type":"ContainerDied","Data":"da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23"} Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.285783 5107 generic.go:358] "Generic (PLEG): container finished" podID="21f1c435-27a8-4463-97da-af76d49f0e7a" containerID="da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23" exitCode=0 Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.289925 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg8hp" event={"ID":"9ae57eec-5514-4d0f-8d41-d78aceca7255","Type":"ContainerStarted","Data":"f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a"} Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.298288 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jx4fv" event={"ID":"b45380af-d55c-4f77-9385-8218e990c675","Type":"ContainerStarted","Data":"c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d"} Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.301976 5107 generic.go:358] "Generic (PLEG): container finished" podID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerID="84b616f640066e8765503006713341cc2a63e0f11fb84451a0d7446d29d26d92" exitCode=0 Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.302069 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcv6m" event={"ID":"6a7f2680-23c6-4334-a9dd-c4328ea41821","Type":"ContainerDied","Data":"84b616f640066e8765503006713341cc2a63e0f11fb84451a0d7446d29d26d92"} Dec 09 14:58:42 crc kubenswrapper[5107]: I1209 14:58:42.319460 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" podStartSLOduration=149.319437874 podStartE2EDuration="2m29.319437874s" podCreationTimestamp="2025-12-09 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:42.307957964 +0000 UTC m=+170.031662873" watchObservedRunningTime="2025-12-09 14:58:42.319437874 +0000 UTC m=+170.043142763" Dec 09 14:58:43 crc kubenswrapper[5107]: I1209 14:58:43.311727 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" event={"ID":"947d55c1-7cdf-48de-b10a-e783956ebbd8","Type":"ContainerStarted","Data":"38e4f9d857db3550f5f5de2fb0dddf5cb04606863094cca7875a852076e8e18f"} Dec 09 14:58:43 crc kubenswrapper[5107]: I1209 14:58:43.314368 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnpmv" event={"ID":"80fe473c-479a-4083-88ed-ff9ec66558b9","Type":"ContainerStarted","Data":"e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a"} Dec 09 14:58:43 crc kubenswrapper[5107]: I1209 14:58:43.315945 5107 generic.go:358] "Generic (PLEG): container finished" podID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" 
containerID="4470f97758135863ddf0c0d1dd2d807a42a078968ba74caa0b619f81bf15463e" exitCode=0 Dec 09 14:58:43 crc kubenswrapper[5107]: I1209 14:58:43.316071 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmk4n" event={"ID":"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e","Type":"ContainerDied","Data":"4470f97758135863ddf0c0d1dd2d807a42a078968ba74caa0b619f81bf15463e"} Dec 09 14:58:43 crc kubenswrapper[5107]: I1209 14:58:43.318561 5107 generic.go:358] "Generic (PLEG): container finished" podID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerID="615fb757de959794b25709af24cfd1e85f0feb0472afe3da6e8b99b9f85b0a39" exitCode=0 Dec 09 14:58:43 crc kubenswrapper[5107]: I1209 14:58:43.318607 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zc87" event={"ID":"7c53c28b-bc39-454a-ad61-1de7109f45ee","Type":"ContainerDied","Data":"615fb757de959794b25709af24cfd1e85f0feb0472afe3da6e8b99b9f85b0a39"} Dec 09 14:58:43 crc kubenswrapper[5107]: I1209 14:58:43.831514 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-22x2p" Dec 09 14:58:43 crc kubenswrapper[5107]: I1209 14:58:43.851959 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-g5bqc" podStartSLOduration=41.851941987000004 podStartE2EDuration="41.851941987s" podCreationTimestamp="2025-12-09 14:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:43.612033665 +0000 UTC m=+171.335738554" watchObservedRunningTime="2025-12-09 14:58:43.851941987 +0000 UTC m=+171.575646876" Dec 09 14:58:44 crc kubenswrapper[5107]: I1209 14:58:44.325542 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vptvb_0b29ecdf-6004-475e-8bcb-5fffa678a02b/kube-multus-additional-cni-plugins/0.log" Dec 09 14:58:44 crc kubenswrapper[5107]: I1209 14:58:44.325849 5107 generic.go:358] "Generic (PLEG): container finished" podID="0b29ecdf-6004-475e-8bcb-5fffa678a02b" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" exitCode=137 Dec 09 14:58:44 crc kubenswrapper[5107]: I1209 14:58:44.325937 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" event={"ID":"0b29ecdf-6004-475e-8bcb-5fffa678a02b","Type":"ContainerDied","Data":"bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448"} Dec 09 14:58:44 crc kubenswrapper[5107]: I1209 14:58:44.328081 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwhqf" event={"ID":"ac4fe46c-6340-4469-947f-e6e295650a97","Type":"ContainerStarted","Data":"4867d9b6c21dbfd9b30e9fdc488b524562098164e70fddca55717745589514fa"} Dec 09 14:58:44 crc kubenswrapper[5107]: I1209 14:58:44.330503 5107 generic.go:358] "Generic (PLEG): container finished" podID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerID="f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a" exitCode=0 Dec 09 14:58:44 crc kubenswrapper[5107]: I1209 14:58:44.330555 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg8hp" event={"ID":"9ae57eec-5514-4d0f-8d41-d78aceca7255","Type":"ContainerDied","Data":"f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a"} Dec 09 14:58:44 crc 
kubenswrapper[5107]: I1209 14:58:44.332502 5107 generic.go:358] "Generic (PLEG): container finished" podID="b45380af-d55c-4f77-9385-8218e990c675" containerID="c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d" exitCode=0 Dec 09 14:58:44 crc kubenswrapper[5107]: I1209 14:58:44.332555 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jx4fv" event={"ID":"b45380af-d55c-4f77-9385-8218e990c675","Type":"ContainerDied","Data":"c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d"} Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.343939 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7hcq" event={"ID":"21f1c435-27a8-4463-97da-af76d49f0e7a","Type":"ContainerStarted","Data":"495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2"} Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.401985 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rnpmv" podStartSLOduration=10.092302179 podStartE2EDuration="31.401959293s" podCreationTimestamp="2025-12-09 14:58:14 +0000 UTC" firstStartedPulling="2025-12-09 14:58:19.822157325 +0000 UTC m=+147.545862214" lastFinishedPulling="2025-12-09 14:58:41.131814439 +0000 UTC m=+168.855519328" observedRunningTime="2025-12-09 14:58:45.400520664 +0000 UTC m=+173.124225553" watchObservedRunningTime="2025-12-09 14:58:45.401959293 +0000 UTC m=+173.125664182" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.419392 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gwhqf" podStartSLOduration=11.103706094 podStartE2EDuration="31.419369633s" podCreationTimestamp="2025-12-09 14:58:14 +0000 UTC" firstStartedPulling="2025-12-09 14:58:20.857729924 +0000 UTC m=+148.581434813" lastFinishedPulling="2025-12-09 14:58:41.173393463 +0000 UTC m=+168.897098352" observedRunningTime="2025-12-09 14:58:45.418775187 +0000 UTC m=+173.142480096" watchObservedRunningTime="2025-12-09 14:58:45.419369633 +0000 UTC m=+173.143074522" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.533319 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vptvb_0b29ecdf-6004-475e-8bcb-5fffa678a02b/kube-multus-additional-cni-plugins/0.log" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.533434 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.674451 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkqq6\" (UniqueName: \"kubernetes.io/projected/0b29ecdf-6004-475e-8bcb-5fffa678a02b-kube-api-access-zkqq6\") pod \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.674670 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0b29ecdf-6004-475e-8bcb-5fffa678a02b-ready\") pod \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.674788 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b29ecdf-6004-475e-8bcb-5fffa678a02b-cni-sysctl-allowlist\") pod \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.674832 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b29ecdf-6004-475e-8bcb-5fffa678a02b-tuning-conf-dir\") pod \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\" (UID: \"0b29ecdf-6004-475e-8bcb-5fffa678a02b\") " Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.675007 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29ecdf-6004-475e-8bcb-5fffa678a02b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "0b29ecdf-6004-475e-8bcb-5fffa678a02b" (UID: "0b29ecdf-6004-475e-8bcb-5fffa678a02b"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.675104 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b29ecdf-6004-475e-8bcb-5fffa678a02b-ready" (OuterVolumeSpecName: "ready") pod "0b29ecdf-6004-475e-8bcb-5fffa678a02b" (UID: "0b29ecdf-6004-475e-8bcb-5fffa678a02b"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.675883 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b29ecdf-6004-475e-8bcb-5fffa678a02b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "0b29ecdf-6004-475e-8bcb-5fffa678a02b" (UID: "0b29ecdf-6004-475e-8bcb-5fffa678a02b"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.681832 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b29ecdf-6004-475e-8bcb-5fffa678a02b-kube-api-access-zkqq6" (OuterVolumeSpecName: "kube-api-access-zkqq6") pod "0b29ecdf-6004-475e-8bcb-5fffa678a02b" (UID: "0b29ecdf-6004-475e-8bcb-5fffa678a02b"). InnerVolumeSpecName "kube-api-access-zkqq6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.776300 5107 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b29ecdf-6004-475e-8bcb-5fffa678a02b-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.776695 5107 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b29ecdf-6004-475e-8bcb-5fffa678a02b-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.776722 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkqq6\" (UniqueName: \"kubernetes.io/projected/0b29ecdf-6004-475e-8bcb-5fffa678a02b-kube-api-access-zkqq6\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:45 crc kubenswrapper[5107]: I1209 14:58:45.776733 5107 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0b29ecdf-6004-475e-8bcb-5fffa678a02b-ready\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.353328 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcv6m" event={"ID":"6a7f2680-23c6-4334-a9dd-c4328ea41821","Type":"ContainerStarted","Data":"453d4341913a9a69563d38214e9654c1e4f5bdf5868caaf2d6ab1256d8594905"} Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.355158 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmk4n" event={"ID":"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e","Type":"ContainerStarted","Data":"d898669a4dd47b98f443572974b15338d8bd7e33f3ebe18620fc58015b5a776c"} Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.356566 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vptvb_0b29ecdf-6004-475e-8bcb-5fffa678a02b/kube-multus-additional-cni-plugins/0.log" Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.356665 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" event={"ID":"0b29ecdf-6004-475e-8bcb-5fffa678a02b","Type":"ContainerDied","Data":"866ebdaf335380dbc828fc419f1063a54f63408d6eaed597a71f4f3b5c63dc29"} Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.356714 5107 scope.go:117] "RemoveContainer" containerID="bf2904ed53d6c4b936a0beb3243fced515b65705ae67b2c3ded7e531c0087448" Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.356840 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vptvb" Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.361588 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zc87" event={"ID":"7c53c28b-bc39-454a-ad61-1de7109f45ee","Type":"ContainerStarted","Data":"17f64405ec1e3e6f73af547c0e0b4d010e43f22e19107442682003908b15f1d5"} Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.365715 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jx4fv" event={"ID":"b45380af-d55c-4f77-9385-8218e990c675","Type":"ContainerStarted","Data":"8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc"} Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.373327 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lcv6m" podStartSLOduration=11.756482474 podStartE2EDuration="34.373312524s" podCreationTimestamp="2025-12-09 14:58:12 +0000 UTC" firstStartedPulling="2025-12-09 14:58:18.542562405 +0000 UTC m=+146.266267294" lastFinishedPulling="2025-12-09 14:58:41.159392455 +0000 UTC m=+168.883097344" observedRunningTime="2025-12-09 14:58:46.372481663 +0000 UTC m=+174.096186552" watchObservedRunningTime="2025-12-09 14:58:46.373312524 +0000 UTC m=+174.097017413" Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.412344 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z7hcq" podStartSLOduration=8.232098711 podStartE2EDuration="34.412309778s" podCreationTimestamp="2025-12-09 14:58:12 +0000 UTC" firstStartedPulling="2025-12-09 14:58:14.95187651 +0000 UTC m=+142.675581399" lastFinishedPulling="2025-12-09 14:58:41.132087577 +0000 UTC m=+168.855792466" observedRunningTime="2025-12-09 14:58:46.41159932 +0000 UTC m=+174.135304219" watchObservedRunningTime="2025-12-09 14:58:46.412309778 +0000 UTC m=+174.136014667" Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.439834 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vptvb"] Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.443287 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vptvb"] Dec 09 14:58:46 crc kubenswrapper[5107]: I1209 14:58:46.825818 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b29ecdf-6004-475e-8bcb-5fffa678a02b" path="/var/lib/kubelet/pods/0b29ecdf-6004-475e-8bcb-5fffa678a02b/volumes" Dec 09 14:58:47 crc kubenswrapper[5107]: I1209 14:58:47.254398 5107 ???:1] "http: TLS handshake error from 192.168.126.11:41446: no serving certificate available for the kubelet" Dec 09 14:58:47 crc kubenswrapper[5107]: I1209 14:58:47.397505 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg8hp" event={"ID":"9ae57eec-5514-4d0f-8d41-d78aceca7255","Type":"ContainerStarted","Data":"d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537"} Dec 09 14:58:47 crc kubenswrapper[5107]: I1209 14:58:47.445001 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cg8hp" podStartSLOduration=12.254794214 podStartE2EDuration="31.444974897s" podCreationTimestamp="2025-12-09 14:58:16 +0000 UTC" firstStartedPulling="2025-12-09 14:58:21.980415484 +0000 UTC m=+149.704120373" lastFinishedPulling="2025-12-09 14:58:41.170596167 +0000 UTC m=+168.894301056" 
observedRunningTime="2025-12-09 14:58:47.442775587 +0000 UTC m=+175.166480476" watchObservedRunningTime="2025-12-09 14:58:47.444974897 +0000 UTC m=+175.168679786" Dec 09 14:58:47 crc kubenswrapper[5107]: I1209 14:58:47.446671 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jx4fv" podStartSLOduration=12.08976334 podStartE2EDuration="32.446659383s" podCreationTimestamp="2025-12-09 14:58:15 +0000 UTC" firstStartedPulling="2025-12-09 14:58:20.903031727 +0000 UTC m=+148.626736616" lastFinishedPulling="2025-12-09 14:58:41.25992777 +0000 UTC m=+168.983632659" observedRunningTime="2025-12-09 14:58:47.419586042 +0000 UTC m=+175.143290941" watchObservedRunningTime="2025-12-09 14:58:47.446659383 +0000 UTC m=+175.170364282" Dec 09 14:58:47 crc kubenswrapper[5107]: I1209 14:58:47.468010 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2zc87" podStartSLOduration=14.108116768 podStartE2EDuration="35.467988879s" podCreationTimestamp="2025-12-09 14:58:12 +0000 UTC" firstStartedPulling="2025-12-09 14:58:19.813852601 +0000 UTC m=+147.537557490" lastFinishedPulling="2025-12-09 14:58:41.173724712 +0000 UTC m=+168.897429601" observedRunningTime="2025-12-09 14:58:47.466525569 +0000 UTC m=+175.190230468" watchObservedRunningTime="2025-12-09 14:58:47.467988879 +0000 UTC m=+175.191693768" Dec 09 14:58:47 crc kubenswrapper[5107]: I1209 14:58:47.490833 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vmk4n" podStartSLOduration=12.667183271 podStartE2EDuration="35.490815396s" podCreationTimestamp="2025-12-09 14:58:12 +0000 UTC" firstStartedPulling="2025-12-09 14:58:18.350530558 +0000 UTC m=+146.074235447" lastFinishedPulling="2025-12-09 14:58:41.174162683 +0000 UTC m=+168.897867572" observedRunningTime="2025-12-09 14:58:47.485233315 +0000 UTC m=+175.208938204" watchObservedRunningTime="2025-12-09 14:58:47.490815396 +0000 UTC m=+175.214520285" Dec 09 14:58:48 crc kubenswrapper[5107]: I1209 14:58:48.549008 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:48 crc kubenswrapper[5107]: I1209 14:58:48.549072 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:50 crc kubenswrapper[5107]: I1209 14:58:50.324145 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cg8hp" podUID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerName="registry-server" probeResult="failure" output=< Dec 09 14:58:50 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Dec 09 14:58:50 crc kubenswrapper[5107]: > Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.397632 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.398360 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b29ecdf-6004-475e-8bcb-5fffa678a02b" containerName="kube-multus-additional-cni-plugins" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.398379 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b29ecdf-6004-475e-8bcb-5fffa678a02b" containerName="kube-multus-additional-cni-plugins" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.398402 5107 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d0b0c00-6091-44b7-a8f0-f0cc529e897a" containerName="pruner" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.398409 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0b0c00-6091-44b7-a8f0-f0cc529e897a" containerName="pruner" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.398439 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4d03c67-d11d-4a22-aa4e-10cc47dddbef" containerName="pruner" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.398447 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d03c67-d11d-4a22-aa4e-10cc47dddbef" containerName="pruner" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.398569 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d0b0c00-6091-44b7-a8f0-f0cc529e897a" containerName="pruner" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.398586 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="0b29ecdf-6004-475e-8bcb-5fffa678a02b" containerName="kube-multus-additional-cni-plugins" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.398598 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4d03c67-d11d-4a22-aa4e-10cc47dddbef" containerName="pruner" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.793226 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.793403 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.796162 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.799950 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.870686 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.870920 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.972718 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.972827 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.973005 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:58:51 crc kubenswrapper[5107]: I1209 14:58:51.993654 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:58:52 crc kubenswrapper[5107]: I1209 14:58:52.120232 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:58:52 crc kubenswrapper[5107]: I1209 14:58:52.329386 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 09 14:58:52 crc kubenswrapper[5107]: I1209 14:58:52.431748 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1","Type":"ContainerStarted","Data":"aba24f595b1372b0aea3813d8b9a8fca8440742d4973e54d45c62ef14814e819"} Dec 09 14:58:52 crc kubenswrapper[5107]: I1209 14:58:52.740125 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:52 crc kubenswrapper[5107]: I1209 14:58:52.740716 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:52 crc kubenswrapper[5107]: I1209 14:58:52.901247 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:52 crc kubenswrapper[5107]: I1209 14:58:52.990741 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:52 crc kubenswrapper[5107]: I1209 14:58:52.990788 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.038782 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.181118 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.181370 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.229181 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.577845 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.577933 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.652303 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.719965 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vmk4n" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.737612 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.781023 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 14:58:53 crc kubenswrapper[5107]: I1209 14:58:53.953358 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-plgtd"] Dec 09 14:58:54 crc kubenswrapper[5107]: I1209 14:58:54.026565 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:54 crc kubenswrapper[5107]: I1209 14:58:54.031576 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:58:54 crc kubenswrapper[5107]: I1209 14:58:54.907909 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:54 crc kubenswrapper[5107]: I1209 14:58:54.907970 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:54 crc kubenswrapper[5107]: I1209 14:58:54.948376 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:55 crc kubenswrapper[5107]: I1209 14:58:55.216161 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:55 crc kubenswrapper[5107]: I1209 14:58:55.216224 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:55 crc kubenswrapper[5107]: I1209 14:58:55.265216 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:55 crc kubenswrapper[5107]: I1209 14:58:55.292220 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lcv6m"] Dec 09 14:58:55 crc kubenswrapper[5107]: I1209 14:58:55.453229 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1","Type":"ContainerStarted","Data":"4d0e51953683ec12fb06368565dff250fec21e3aa6bbc871f89d0fbc828051da"} Dec 09 14:58:55 crc kubenswrapper[5107]: I1209 14:58:55.468470 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=4.468452431 podStartE2EDuration="4.468452431s" podCreationTimestamp="2025-12-09 14:58:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:58:55.466084777 +0000 UTC m=+183.189789666" watchObservedRunningTime="2025-12-09 14:58:55.468452431 +0000 UTC m=+183.192157330" Dec 09 14:58:55 crc kubenswrapper[5107]: I1209 14:58:55.492694 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:58:55 crc kubenswrapper[5107]: I1209 14:58:55.500501 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 14:58:55 crc kubenswrapper[5107]: I1209 14:58:55.890046 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2zc87"] Dec 09 14:58:56 crc kubenswrapper[5107]: I1209 14:58:56.459278 5107 generic.go:358] "Generic (PLEG): container finished" podID="04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1" containerID="4d0e51953683ec12fb06368565dff250fec21e3aa6bbc871f89d0fbc828051da" exitCode=0 Dec 09 14:58:56 crc kubenswrapper[5107]: I1209 14:58:56.459395 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1","Type":"ContainerDied","Data":"4d0e51953683ec12fb06368565dff250fec21e3aa6bbc871f89d0fbc828051da"} Dec 09 14:58:56 crc kubenswrapper[5107]: I1209 14:58:56.470711 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lcv6m" podUID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerName="registry-server" containerID="cri-o://453d4341913a9a69563d38214e9654c1e4f5bdf5868caaf2d6ab1256d8594905" gracePeriod=2 Dec 09 14:58:56 crc kubenswrapper[5107]: I1209 14:58:56.472061 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2zc87" podUID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerName="registry-server" containerID="cri-o://17f64405ec1e3e6f73af547c0e0b4d010e43f22e19107442682003908b15f1d5" gracePeriod=2 Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.218842 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.219238 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.269096 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.466269 5107 generic.go:358] "Generic (PLEG): container finished" podID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerID="453d4341913a9a69563d38214e9654c1e4f5bdf5868caaf2d6ab1256d8594905" exitCode=0 Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.466409 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcv6m" event={"ID":"6a7f2680-23c6-4334-a9dd-c4328ea41821","Type":"ContainerDied","Data":"453d4341913a9a69563d38214e9654c1e4f5bdf5868caaf2d6ab1256d8594905"} Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.469313 5107 generic.go:358] "Generic (PLEG): container finished" podID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerID="17f64405ec1e3e6f73af547c0e0b4d010e43f22e19107442682003908b15f1d5" exitCode=0 Dec 09 
14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.469358 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zc87" event={"ID":"7c53c28b-bc39-454a-ad61-1de7109f45ee","Type":"ContainerDied","Data":"17f64405ec1e3e6f73af547c0e0b4d010e43f22e19107442682003908b15f1d5"} Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.513545 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.695899 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwhqf"] Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.696251 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gwhqf" podUID="ac4fe46c-6340-4469-947f-e6e295650a97" containerName="registry-server" containerID="cri-o://4867d9b6c21dbfd9b30e9fdc488b524562098164e70fddca55717745589514fa" gracePeriod=2 Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.760510 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.905791 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kubelet-dir\") pod \"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1\" (UID: \"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1\") " Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.905938 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1" (UID: "04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.905991 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kube-api-access\") pod \"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1\" (UID: \"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1\") " Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.906359 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:57 crc kubenswrapper[5107]: I1209 14:58:57.915661 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1" (UID: "04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:58:58 crc kubenswrapper[5107]: I1209 14:58:58.008055 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:58 crc kubenswrapper[5107]: I1209 14:58:58.475697 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1","Type":"ContainerDied","Data":"aba24f595b1372b0aea3813d8b9a8fca8440742d4973e54d45c62ef14814e819"} Dec 09 14:58:58 crc kubenswrapper[5107]: I1209 14:58:58.475741 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aba24f595b1372b0aea3813d8b9a8fca8440742d4973e54d45c62ef14814e819" Dec 09 14:58:58 crc kubenswrapper[5107]: I1209 14:58:58.476009 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:58:58 crc kubenswrapper[5107]: I1209 14:58:58.598934 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:58 crc kubenswrapper[5107]: I1209 14:58:58.648031 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.110792 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.225484 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-catalog-content\") pod \"6a7f2680-23c6-4334-a9dd-c4328ea41821\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.225962 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl8q8\" (UniqueName: \"kubernetes.io/projected/6a7f2680-23c6-4334-a9dd-c4328ea41821-kube-api-access-kl8q8\") pod \"6a7f2680-23c6-4334-a9dd-c4328ea41821\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.226144 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-utilities\") pod \"6a7f2680-23c6-4334-a9dd-c4328ea41821\" (UID: \"6a7f2680-23c6-4334-a9dd-c4328ea41821\") " Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.227062 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-utilities" (OuterVolumeSpecName: "utilities") pod "6a7f2680-23c6-4334-a9dd-c4328ea41821" (UID: "6a7f2680-23c6-4334-a9dd-c4328ea41821"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.236005 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7f2680-23c6-4334-a9dd-c4328ea41821-kube-api-access-kl8q8" (OuterVolumeSpecName: "kube-api-access-kl8q8") pod "6a7f2680-23c6-4334-a9dd-c4328ea41821" (UID: "6a7f2680-23c6-4334-a9dd-c4328ea41821"). InnerVolumeSpecName "kube-api-access-kl8q8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.259689 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a7f2680-23c6-4334-a9dd-c4328ea41821" (UID: "6a7f2680-23c6-4334-a9dd-c4328ea41821"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.328028 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kl8q8\" (UniqueName: \"kubernetes.io/projected/6a7f2680-23c6-4334-a9dd-c4328ea41821-kube-api-access-kl8q8\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.328062 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.328072 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a7f2680-23c6-4334-a9dd-c4328ea41821-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.490678 5107 generic.go:358] "Generic (PLEG): container finished" podID="ac4fe46c-6340-4469-947f-e6e295650a97" containerID="4867d9b6c21dbfd9b30e9fdc488b524562098164e70fddca55717745589514fa" exitCode=0 Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.490744 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwhqf" event={"ID":"ac4fe46c-6340-4469-947f-e6e295650a97","Type":"ContainerDied","Data":"4867d9b6c21dbfd9b30e9fdc488b524562098164e70fddca55717745589514fa"} Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.494539 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lcv6m" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.494546 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcv6m" event={"ID":"6a7f2680-23c6-4334-a9dd-c4328ea41821","Type":"ContainerDied","Data":"8296b2dd02c82c11378b60288705136ad16de937ef3a8a379b2500ebdd072468"} Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.494662 5107 scope.go:117] "RemoveContainer" containerID="453d4341913a9a69563d38214e9654c1e4f5bdf5868caaf2d6ab1256d8594905" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.529411 5107 scope.go:117] "RemoveContainer" containerID="84b616f640066e8765503006713341cc2a63e0f11fb84451a0d7446d29d26d92" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.535161 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lcv6m"] Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.539564 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lcv6m"] Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.593215 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.593840 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1" containerName="pruner" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.593857 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1" containerName="pruner" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.593891 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerName="registry-server" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.593896 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerName="registry-server" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.593904 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerName="extract-utilities" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.593912 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerName="extract-utilities" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.593924 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerName="extract-content" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.593929 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerName="extract-content" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.594020 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="04c5f8f8-4f47-47bb-af7a-24f66eb3f9f1" containerName="pruner" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.594031 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="6a7f2680-23c6-4334-a9dd-c4328ea41821" containerName="registry-server" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.602064 5107 scope.go:117] "RemoveContainer" containerID="c4b050dbd790417068933eaa7294f41b980d2d6c3d41bbabb069e6ec2947121e" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.612897 5107 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.732077 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-utilities\") pod \"7c53c28b-bc39-454a-ad61-1de7109f45ee\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.732151 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-catalog-content\") pod \"7c53c28b-bc39-454a-ad61-1de7109f45ee\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.732187 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw8jm\" (UniqueName: \"kubernetes.io/projected/7c53c28b-bc39-454a-ad61-1de7109f45ee-kube-api-access-kw8jm\") pod \"7c53c28b-bc39-454a-ad61-1de7109f45ee\" (UID: \"7c53c28b-bc39-454a-ad61-1de7109f45ee\") " Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.733430 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-utilities" (OuterVolumeSpecName: "utilities") pod "7c53c28b-bc39-454a-ad61-1de7109f45ee" (UID: "7c53c28b-bc39-454a-ad61-1de7109f45ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.736087 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c53c28b-bc39-454a-ad61-1de7109f45ee-kube-api-access-kw8jm" (OuterVolumeSpecName: "kube-api-access-kw8jm") pod "7c53c28b-bc39-454a-ad61-1de7109f45ee" (UID: "7c53c28b-bc39-454a-ad61-1de7109f45ee"). InnerVolumeSpecName "kube-api-access-kw8jm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.778542 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c53c28b-bc39-454a-ad61-1de7109f45ee" (UID: "7c53c28b-bc39-454a-ad61-1de7109f45ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.834018 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.834052 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c53c28b-bc39-454a-ad61-1de7109f45ee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:58:59 crc kubenswrapper[5107]: I1209 14:58:59.834065 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kw8jm\" (UniqueName: \"kubernetes.io/projected/7c53c28b-bc39-454a-ad61-1de7109f45ee-kube-api-access-kw8jm\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.012977 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.013206 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.016743 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.016847 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.140509 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d36e4b34-f551-453a-bc2c-6a250acdb84e-kube-api-access\") pod \"installer-12-crc\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.140571 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.140698 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-var-lock\") pod \"installer-12-crc\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.233281 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.241776 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-var-lock\") pod \"installer-12-crc\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.241853 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d36e4b34-f551-453a-bc2c-6a250acdb84e-kube-api-access\") pod \"installer-12-crc\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.241884 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.241939 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-var-lock\") pod \"installer-12-crc\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.241958 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.278326 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d36e4b34-f551-453a-bc2c-6a250acdb84e-kube-api-access\") pod \"installer-12-crc\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.330211 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.343175 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47x6j\" (UniqueName: \"kubernetes.io/projected/ac4fe46c-6340-4469-947f-e6e295650a97-kube-api-access-47x6j\") pod \"ac4fe46c-6340-4469-947f-e6e295650a97\" (UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.343238 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-catalog-content\") pod \"ac4fe46c-6340-4469-947f-e6e295650a97\" (UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.343358 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-utilities\") pod \"ac4fe46c-6340-4469-947f-e6e295650a97\" (UID: \"ac4fe46c-6340-4469-947f-e6e295650a97\") " Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.344314 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-utilities" (OuterVolumeSpecName: "utilities") pod "ac4fe46c-6340-4469-947f-e6e295650a97" (UID: "ac4fe46c-6340-4469-947f-e6e295650a97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.349953 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4fe46c-6340-4469-947f-e6e295650a97-kube-api-access-47x6j" (OuterVolumeSpecName: "kube-api-access-47x6j") pod "ac4fe46c-6340-4469-947f-e6e295650a97" (UID: "ac4fe46c-6340-4469-947f-e6e295650a97"). InnerVolumeSpecName "kube-api-access-47x6j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.359028 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac4fe46c-6340-4469-947f-e6e295650a97" (UID: "ac4fe46c-6340-4469-947f-e6e295650a97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.446052 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47x6j\" (UniqueName: \"kubernetes.io/projected/ac4fe46c-6340-4469-947f-e6e295650a97-kube-api-access-47x6j\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.446433 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.446447 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4fe46c-6340-4469-947f-e6e295650a97-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.509502 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zc87" event={"ID":"7c53c28b-bc39-454a-ad61-1de7109f45ee","Type":"ContainerDied","Data":"e6b0d7e0929c72c9a6cc1a1c25bc72960d514fa9dea6b0804553e886d4156dea"} Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.509569 5107 scope.go:117] "RemoveContainer" containerID="17f64405ec1e3e6f73af547c0e0b4d010e43f22e19107442682003908b15f1d5" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.509727 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2zc87" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.525039 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwhqf" event={"ID":"ac4fe46c-6340-4469-947f-e6e295650a97","Type":"ContainerDied","Data":"93e2e8ccc89a9a375537d0f858f7341cae0d5b14002349644921126c47e13d4c"} Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.530546 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwhqf" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.562658 5107 scope.go:117] "RemoveContainer" containerID="615fb757de959794b25709af24cfd1e85f0feb0472afe3da6e8b99b9f85b0a39" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.563820 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2zc87"] Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.566910 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2zc87"] Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.584005 5107 scope.go:117] "RemoveContainer" containerID="5c97f87909aa19b7edaa86df93b295303e2f01a8585a3310bde0d3316e522043" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.614141 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwhqf"] Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.623321 5107 scope.go:117] "RemoveContainer" containerID="4867d9b6c21dbfd9b30e9fdc488b524562098164e70fddca55717745589514fa" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.623475 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwhqf"] Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.638549 5107 scope.go:117] "RemoveContainer" containerID="856f0f18b2d352256c6a0b740ad281810f7139569fe7ae48c7b232e8c613c52f" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.655305 5107 scope.go:117] "RemoveContainer" containerID="aa8e8ab02b52ec90b4f72ab4db6bf4330004c1b7025f9d930d1f71e5653cc3b9" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.688636 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cg8hp"] Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.688934 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cg8hp" podUID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerName="registry-server" containerID="cri-o://d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537" gracePeriod=2 Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.736759 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 09 14:59:00 crc kubenswrapper[5107]: W1209 14:59:00.741518 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd36e4b34_f551_453a_bc2c_6a250acdb84e.slice/crio-6931ac5f0e744494eff16c2f1c76707729ea6622df50ce8fe923c4459e094158 WatchSource:0}: Error finding container 6931ac5f0e744494eff16c2f1c76707729ea6622df50ce8fe923c4459e094158: Status 404 returned error can't find the container with id 6931ac5f0e744494eff16c2f1c76707729ea6622df50ce8fe923c4459e094158 Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.830048 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7f2680-23c6-4334-a9dd-c4328ea41821" path="/var/lib/kubelet/pods/6a7f2680-23c6-4334-a9dd-c4328ea41821/volumes" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.830877 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c53c28b-bc39-454a-ad61-1de7109f45ee" path="/var/lib/kubelet/pods/7c53c28b-bc39-454a-ad61-1de7109f45ee/volumes" Dec 09 14:59:00 crc kubenswrapper[5107]: I1209 14:59:00.838362 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4fe46c-6340-4469-947f-e6e295650a97" 
path="/var/lib/kubelet/pods/ac4fe46c-6340-4469-947f-e6e295650a97/volumes" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.047845 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.155562 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54lrz\" (UniqueName: \"kubernetes.io/projected/9ae57eec-5514-4d0f-8d41-d78aceca7255-kube-api-access-54lrz\") pod \"9ae57eec-5514-4d0f-8d41-d78aceca7255\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.155618 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-catalog-content\") pod \"9ae57eec-5514-4d0f-8d41-d78aceca7255\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.155639 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-utilities\") pod \"9ae57eec-5514-4d0f-8d41-d78aceca7255\" (UID: \"9ae57eec-5514-4d0f-8d41-d78aceca7255\") " Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.156988 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-utilities" (OuterVolumeSpecName: "utilities") pod "9ae57eec-5514-4d0f-8d41-d78aceca7255" (UID: "9ae57eec-5514-4d0f-8d41-d78aceca7255"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.161845 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae57eec-5514-4d0f-8d41-d78aceca7255-kube-api-access-54lrz" (OuterVolumeSpecName: "kube-api-access-54lrz") pod "9ae57eec-5514-4d0f-8d41-d78aceca7255" (UID: "9ae57eec-5514-4d0f-8d41-d78aceca7255"). InnerVolumeSpecName "kube-api-access-54lrz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.257011 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-54lrz\" (UniqueName: \"kubernetes.io/projected/9ae57eec-5514-4d0f-8d41-d78aceca7255-kube-api-access-54lrz\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.257050 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.266148 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ae57eec-5514-4d0f-8d41-d78aceca7255" (UID: "9ae57eec-5514-4d0f-8d41-d78aceca7255"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.358231 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae57eec-5514-4d0f-8d41-d78aceca7255-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.534001 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d36e4b34-f551-453a-bc2c-6a250acdb84e","Type":"ContainerStarted","Data":"8896d0c106bf886ea663d9c7940fee374e13bbc422a7fd9da36dc723fbaf41f9"} Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.534369 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d36e4b34-f551-453a-bc2c-6a250acdb84e","Type":"ContainerStarted","Data":"6931ac5f0e744494eff16c2f1c76707729ea6622df50ce8fe923c4459e094158"} Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.539421 5107 generic.go:358] "Generic (PLEG): container finished" podID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerID="d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537" exitCode=0 Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.539530 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg8hp" event={"ID":"9ae57eec-5514-4d0f-8d41-d78aceca7255","Type":"ContainerDied","Data":"d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537"} Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.539566 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cg8hp" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.539612 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg8hp" event={"ID":"9ae57eec-5514-4d0f-8d41-d78aceca7255","Type":"ContainerDied","Data":"fe5c93b6ab865bc6f036dd7d332cf77ed2f90d68bc143da64eede4cada49eed4"} Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.539644 5107 scope.go:117] "RemoveContainer" containerID="d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.557897 5107 scope.go:117] "RemoveContainer" containerID="f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.562831 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.562812986 podStartE2EDuration="2.562812986s" podCreationTimestamp="2025-12-09 14:58:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:59:01.561488861 +0000 UTC m=+189.285193750" watchObservedRunningTime="2025-12-09 14:59:01.562812986 +0000 UTC m=+189.286517875" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.579800 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cg8hp"] Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.583964 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cg8hp"] Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.594321 5107 scope.go:117] "RemoveContainer" containerID="21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 
14:59:01.612503 5107 scope.go:117] "RemoveContainer" containerID="d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537" Dec 09 14:59:01 crc kubenswrapper[5107]: E1209 14:59:01.613083 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537\": container with ID starting with d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537 not found: ID does not exist" containerID="d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.613662 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537"} err="failed to get container status \"d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537\": rpc error: code = NotFound desc = could not find container \"d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537\": container with ID starting with d83e23a01334b4aaeef1ad7770574c636f7aba8633c3b5e57c5986d1facd1537 not found: ID does not exist" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.613711 5107 scope.go:117] "RemoveContainer" containerID="f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a" Dec 09 14:59:01 crc kubenswrapper[5107]: E1209 14:59:01.614001 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a\": container with ID starting with f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a not found: ID does not exist" containerID="f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.614029 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a"} err="failed to get container status \"f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a\": rpc error: code = NotFound desc = could not find container \"f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a\": container with ID starting with f94391c9d011277ffc10c4542e97c7034e4d548da52ce3de41e88d651411f03a not found: ID does not exist" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.614044 5107 scope.go:117] "RemoveContainer" containerID="21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0" Dec 09 14:59:01 crc kubenswrapper[5107]: E1209 14:59:01.614289 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0\": container with ID starting with 21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0 not found: ID does not exist" containerID="21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0" Dec 09 14:59:01 crc kubenswrapper[5107]: I1209 14:59:01.614445 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0"} err="failed to get container status \"21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0\": rpc error: code = NotFound desc = could not find container \"21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0\": container with ID 
starting with 21a4d500256f1a9f86fa9ca9095cf1ee964735ff493c73d6cb145cd226fe59c0 not found: ID does not exist" Dec 09 14:59:02 crc kubenswrapper[5107]: I1209 14:59:02.824882 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae57eec-5514-4d0f-8d41-d78aceca7255" path="/var/lib/kubelet/pods/9ae57eec-5514-4d0f-8d41-d78aceca7255/volumes" Dec 09 14:59:03 crc kubenswrapper[5107]: I1209 14:59:03.585439 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.001326 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" podUID="834666aa-f503-44df-8377-77c8670167cd" containerName="oauth-openshift" containerID="cri-o://c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4" gracePeriod=15 Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.362876 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.395530 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-579c985f4c-bm2wq"] Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396081 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerName="registry-server" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396104 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerName="registry-server" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396114 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerName="extract-content" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396120 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerName="extract-content" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396128 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerName="extract-content" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396134 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerName="extract-content" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396141 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ac4fe46c-6340-4469-947f-e6e295650a97" containerName="extract-content" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396146 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4fe46c-6340-4469-947f-e6e295650a97" containerName="extract-content" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396158 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerName="extract-utilities" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396163 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerName="extract-utilities" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396173 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="ac4fe46c-6340-4469-947f-e6e295650a97" containerName="extract-utilities" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396178 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4fe46c-6340-4469-947f-e6e295650a97" containerName="extract-utilities" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396184 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerName="registry-server" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396189 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerName="registry-server" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396198 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="834666aa-f503-44df-8377-77c8670167cd" containerName="oauth-openshift" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396203 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="834666aa-f503-44df-8377-77c8670167cd" containerName="oauth-openshift" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396211 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ac4fe46c-6340-4469-947f-e6e295650a97" containerName="registry-server" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396217 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4fe46c-6340-4469-947f-e6e295650a97" containerName="registry-server" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396239 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerName="extract-utilities" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396244 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerName="extract-utilities" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396326 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="9ae57eec-5514-4d0f-8d41-d78aceca7255" containerName="registry-server" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396352 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="7c53c28b-bc39-454a-ad61-1de7109f45ee" containerName="registry-server" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396362 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="834666aa-f503-44df-8377-77c8670167cd" containerName="oauth-openshift" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.396370 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ac4fe46c-6340-4469-947f-e6e295650a97" containerName="registry-server" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.401826 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.403721 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-579c985f4c-bm2wq"] Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.471980 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8gq7\" (UniqueName: \"kubernetes.io/projected/834666aa-f503-44df-8377-77c8670167cd-kube-api-access-h8gq7\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472071 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-service-ca\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472095 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-cliconfig\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472133 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-login\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472163 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-ocp-branding-template\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472187 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/834666aa-f503-44df-8377-77c8670167cd-audit-dir\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472212 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-router-certs\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472242 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-audit-policies\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472274 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-trusted-ca-bundle\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472303 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-error\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472378 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-provider-selection\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472407 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-serving-cert\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472439 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-session\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472490 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-idp-0-file-data\") pod \"834666aa-f503-44df-8377-77c8670167cd\" (UID: \"834666aa-f503-44df-8377-77c8670167cd\") " Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472588 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwr5\" (UniqueName: \"kubernetes.io/projected/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-kube-api-access-9lwr5\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472622 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472652 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc 
kubenswrapper[5107]: I1209 14:59:19.472680 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472726 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472763 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472797 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472815 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-template-error\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472843 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-template-login\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472882 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-audit-policies\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472907 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-session\") pod 
\"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472958 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472981 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.472999 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-audit-dir\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.473357 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/834666aa-f503-44df-8377-77c8670167cd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.473642 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.473857 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.474076 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.474190 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.478698 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.478791 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834666aa-f503-44df-8377-77c8670167cd-kube-api-access-h8gq7" (OuterVolumeSpecName: "kube-api-access-h8gq7") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "kube-api-access-h8gq7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.478971 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.479174 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.479533 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.479672 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.480203 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.480852 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.489463 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "834666aa-f503-44df-8377-77c8670167cd" (UID: "834666aa-f503-44df-8377-77c8670167cd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.574806 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-template-login\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.574888 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-audit-policies\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.574925 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-session\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575090 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575132 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575158 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-audit-dir\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575207 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwr5\" (UniqueName: \"kubernetes.io/projected/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-kube-api-access-9lwr5\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575242 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575284 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575325 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575415 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575475 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575518 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575549 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-template-error\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575619 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575654 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575674 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575675 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-audit-dir\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575696 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575732 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575749 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575762 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h8gq7\" (UniqueName: \"kubernetes.io/projected/834666aa-f503-44df-8377-77c8670167cd-kube-api-access-h8gq7\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575774 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575783 
5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575795 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575820 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575832 5107 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/834666aa-f503-44df-8377-77c8670167cd-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575845 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/834666aa-f503-44df-8377-77c8670167cd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575871 5107 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/834666aa-f503-44df-8377-77c8670167cd-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.575954 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-audit-policies\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.576498 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.577311 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.578697 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.579228 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.579641 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.580119 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.580220 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-session\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.580240 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.580418 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-template-login\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.580726 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-user-template-error\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.583325 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.593085 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwr5\" (UniqueName: 
\"kubernetes.io/projected/1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1-kube-api-access-9lwr5\") pod \"oauth-openshift-579c985f4c-bm2wq\" (UID: \"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1\") " pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.718308 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.757081 5107 generic.go:358] "Generic (PLEG): container finished" podID="834666aa-f503-44df-8377-77c8670167cd" containerID="c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4" exitCode=0 Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.757278 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" event={"ID":"834666aa-f503-44df-8377-77c8670167cd","Type":"ContainerDied","Data":"c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4"} Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.757306 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" event={"ID":"834666aa-f503-44df-8377-77c8670167cd","Type":"ContainerDied","Data":"c345a3444d4deef1e90144b46d6b5c84a184f0c0be4e8d8e02f91bd7e0a0ec6d"} Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.757324 5107 scope.go:117] "RemoveContainer" containerID="c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.757503 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-plgtd" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.783921 5107 scope.go:117] "RemoveContainer" containerID="c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4" Dec 09 14:59:19 crc kubenswrapper[5107]: E1209 14:59:19.784554 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4\": container with ID starting with c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4 not found: ID does not exist" containerID="c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.784595 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4"} err="failed to get container status \"c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4\": rpc error: code = NotFound desc = could not find container \"c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4\": container with ID starting with c43abc904f2c7944a368c413fc6fc375e98890e9ab8863a5ee8e0083945441f4 not found: ID does not exist" Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.788082 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-plgtd"] Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.792152 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-plgtd"] Dec 09 14:59:19 crc kubenswrapper[5107]: I1209 14:59:19.903828 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-579c985f4c-bm2wq"] Dec 09 14:59:20 
crc kubenswrapper[5107]: I1209 14:59:20.766637 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" event={"ID":"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1","Type":"ContainerStarted","Data":"3151bc329305af0935d481675d38a02bb9594d9906ae5aa306b97c1b3cb21723"} Dec 09 14:59:20 crc kubenswrapper[5107]: I1209 14:59:20.767033 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" event={"ID":"1c237b2d-b6a8-4c1c-bdba-ad42c42ebcb1","Type":"ContainerStarted","Data":"4a8cd4ac407e6bddf3315b8dc9ced364c9714e8c6c4638e529287ed65c82e24a"} Dec 09 14:59:20 crc kubenswrapper[5107]: I1209 14:59:20.767787 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:20 crc kubenswrapper[5107]: I1209 14:59:20.775315 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" Dec 09 14:59:20 crc kubenswrapper[5107]: I1209 14:59:20.821442 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-579c985f4c-bm2wq" podStartSLOduration=27.821425212 podStartE2EDuration="27.821425212s" podCreationTimestamp="2025-12-09 14:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:59:20.79109389 +0000 UTC m=+208.514798829" watchObservedRunningTime="2025-12-09 14:59:20.821425212 +0000 UTC m=+208.545130101" Dec 09 14:59:20 crc kubenswrapper[5107]: I1209 14:59:20.855318 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="834666aa-f503-44df-8377-77c8670167cd" path="/var/lib/kubelet/pods/834666aa-f503-44df-8377-77c8670167cd/volumes" Dec 09 14:59:28 crc kubenswrapper[5107]: I1209 14:59:28.241977 5107 ???:1] "http: TLS handshake error from 192.168.126.11:48796: no serving certificate available for the kubelet" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.960829 5107 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.992922 5107 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.992990 5107 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.993158 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.993770 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928" gracePeriod=15 Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.993906 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.993926 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.993903 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed" gracePeriod=15 Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.993962 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607" gracePeriod=15 Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994004 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb" gracePeriod=15 Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.993936 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994104 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994112 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994120 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994137 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994145 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994155 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 09 14:59:38 crc 
kubenswrapper[5107]: I1209 14:59:38.994161 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994178 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994189 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994203 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994210 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.993905 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0" gracePeriod=15 Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994222 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994282 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994480 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994493 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994502 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994514 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994521 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994544 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994555 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994670 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994678 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994687 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994693 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994793 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:38 crc kubenswrapper[5107]: I1209 14:59:38.994803 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.001325 5107 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.012726 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.030486 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.045709 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.045766 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.045831 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.045878 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.045919 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.045950 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.045983 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.046013 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.046038 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.046141 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.147642 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.148125 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.148155 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.148225 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.148266 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.148287 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.148305 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.148326 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.148370 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.148412 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.149040 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.147803 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.149096 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.149137 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.149105 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.149137 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.149111 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.149169 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.149221 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.149538 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.323787 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:59:39 crc kubenswrapper[5107]: W1209 14:59:39.349218 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-df8823ef328b4daf74274f4038ef7f3c1c5aa3b23233ac06cda0b6b8192949d4 WatchSource:0}: Error finding container df8823ef328b4daf74274f4038ef7f3c1c5aa3b23233ac06cda0b6b8192949d4: Status 404 returned error can't find the container with id df8823ef328b4daf74274f4038ef7f3c1c5aa3b23233ac06cda0b6b8192949d4 Dec 09 14:59:39 crc kubenswrapper[5107]: E1209 14:59:39.354166 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f9410ec06df4a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:59:39.352784714 +0000 UTC m=+227.076489613,LastTimestamp:2025-12-09 14:59:39.352784714 +0000 UTC m=+227.076489613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.892125 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.894580 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.895753 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb" exitCode=0 Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.895806 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed" exitCode=0 Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.895823 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0" exitCode=0 Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.895839 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607" exitCode=2 Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.895912 5107 scope.go:117] "RemoveContainer" containerID="dcfe82d78bf385817437f96e0cd90ec7f0e152520e5d2289ea61ded6ec88972e" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 
14:59:39.898716 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842"} Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.898769 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"df8823ef328b4daf74274f4038ef7f3c1c5aa3b23233ac06cda0b6b8192949d4"} Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.899573 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.902155 5107 generic.go:358] "Generic (PLEG): container finished" podID="d36e4b34-f551-453a-bc2c-6a250acdb84e" containerID="8896d0c106bf886ea663d9c7940fee374e13bbc422a7fd9da36dc723fbaf41f9" exitCode=0 Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.902255 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d36e4b34-f551-453a-bc2c-6a250acdb84e","Type":"ContainerDied","Data":"8896d0c106bf886ea663d9c7940fee374e13bbc422a7fd9da36dc723fbaf41f9"} Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.903290 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:39 crc kubenswrapper[5107]: I1209 14:59:39.903813 5107 status_manager.go:895] "Failed to get status for pod" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:40 crc kubenswrapper[5107]: I1209 14:59:40.913511 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.182565 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.184085 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.184399 5107 status_manager.go:895] "Failed to get status for pod" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.283934 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d36e4b34-f551-453a-bc2c-6a250acdb84e-kube-api-access\") pod \"d36e4b34-f551-453a-bc2c-6a250acdb84e\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.284153 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-kubelet-dir\") pod \"d36e4b34-f551-453a-bc2c-6a250acdb84e\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.284187 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-var-lock\") pod \"d36e4b34-f551-453a-bc2c-6a250acdb84e\" (UID: \"d36e4b34-f551-453a-bc2c-6a250acdb84e\") " Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.284216 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d36e4b34-f551-453a-bc2c-6a250acdb84e" (UID: "d36e4b34-f551-453a-bc2c-6a250acdb84e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.284238 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-var-lock" (OuterVolumeSpecName: "var-lock") pod "d36e4b34-f551-453a-bc2c-6a250acdb84e" (UID: "d36e4b34-f551-453a-bc2c-6a250acdb84e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.284426 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.284446 5107 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d36e4b34-f551-453a-bc2c-6a250acdb84e-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.293157 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36e4b34-f551-453a-bc2c-6a250acdb84e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d36e4b34-f551-453a-bc2c-6a250acdb84e" (UID: "d36e4b34-f551-453a-bc2c-6a250acdb84e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.386150 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d36e4b34-f551-453a-bc2c-6a250acdb84e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.409775 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.410712 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.411433 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.411836 5107 status_manager.go:895] "Failed to get status for pod" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.412425 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487199 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487310 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: 
"3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487366 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487401 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487420 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487439 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487449 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487503 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487713 5107 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487727 5107 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.487737 5107 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.488310 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.491606 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.589701 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.589756 5107 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.921529 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d36e4b34-f551-453a-bc2c-6a250acdb84e","Type":"ContainerDied","Data":"6931ac5f0e744494eff16c2f1c76707729ea6622df50ce8fe923c4459e094158"} Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.921914 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6931ac5f0e744494eff16c2f1c76707729ea6622df50ce8fe923c4459e094158" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.921571 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.925099 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.937944 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928" exitCode=0 Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.938043 5107 scope.go:117] "RemoveContainer" containerID="96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.938299 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.960317 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.960642 5107 status_manager.go:895] "Failed to get status for pod" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.961024 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.961434 5107 status_manager.go:895] "Failed to get status for pod" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.961656 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.962079 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.963565 5107 scope.go:117] "RemoveContainer" containerID="bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.979547 5107 scope.go:117] "RemoveContainer" containerID="169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0" Dec 09 14:59:41 crc kubenswrapper[5107]: I1209 14:59:41.998101 5107 scope.go:117] "RemoveContainer" containerID="8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.019663 5107 scope.go:117] "RemoveContainer" containerID="a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.039430 5107 scope.go:117] "RemoveContainer" containerID="68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.112707 5107 scope.go:117] "RemoveContainer" containerID="96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb" Dec 09 14:59:42 crc 
kubenswrapper[5107]: E1209 14:59:42.115825 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb\": container with ID starting with 96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb not found: ID does not exist" containerID="96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.115945 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb"} err="failed to get container status \"96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb\": rpc error: code = NotFound desc = could not find container \"96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb\": container with ID starting with 96cc1e2c0cd6007e9aa65ba5a09171100f571ede67492ed57351db6ae05db1fb not found: ID does not exist" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.116033 5107 scope.go:117] "RemoveContainer" containerID="bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed" Dec 09 14:59:42 crc kubenswrapper[5107]: E1209 14:59:42.119327 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\": container with ID starting with bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed not found: ID does not exist" containerID="bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.119398 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed"} err="failed to get container status \"bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\": rpc error: code = NotFound desc = could not find container \"bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed\": container with ID starting with bdd7c59df86281d094a2a2e4476c5b9b6871fbd53bdb2c63b08a1822f3692aed not found: ID does not exist" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.119432 5107 scope.go:117] "RemoveContainer" containerID="169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0" Dec 09 14:59:42 crc kubenswrapper[5107]: E1209 14:59:42.119764 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\": container with ID starting with 169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0 not found: ID does not exist" containerID="169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.119793 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0"} err="failed to get container status \"169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\": rpc error: code = NotFound desc = could not find container \"169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0\": container with ID starting with 169d737323fa61136fa0b652a15316035559871415b44a78987aa3ba6b800da0 not found: ID does not exist" Dec 09 14:59:42 crc kubenswrapper[5107]: 
I1209 14:59:42.119814 5107 scope.go:117] "RemoveContainer" containerID="8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607" Dec 09 14:59:42 crc kubenswrapper[5107]: E1209 14:59:42.120409 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\": container with ID starting with 8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607 not found: ID does not exist" containerID="8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.120508 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607"} err="failed to get container status \"8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\": rpc error: code = NotFound desc = could not find container \"8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607\": container with ID starting with 8ff8ee8fa9bdc13d78b24a7ed20b77f712c6d70019508ed464dea1e799dfc607 not found: ID does not exist" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.120555 5107 scope.go:117] "RemoveContainer" containerID="a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928" Dec 09 14:59:42 crc kubenswrapper[5107]: E1209 14:59:42.121075 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\": container with ID starting with a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928 not found: ID does not exist" containerID="a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.121165 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928"} err="failed to get container status \"a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\": rpc error: code = NotFound desc = could not find container \"a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928\": container with ID starting with a7a842582313d50b3a1310b8e3656dc938751f1b63c38bd4eef8a2b6fae1c928 not found: ID does not exist" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.121249 5107 scope.go:117] "RemoveContainer" containerID="68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b" Dec 09 14:59:42 crc kubenswrapper[5107]: E1209 14:59:42.121739 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\": container with ID starting with 68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b not found: ID does not exist" containerID="68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.121769 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b"} err="failed to get container status \"68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\": rpc error: code = NotFound desc = could not find container \"68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b\": container 
with ID starting with 68a813d217d22a3cbcc2432609817ec96bf285d26990afb1a4552cae8b0ffd2b not found: ID does not exist" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.826003 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.826556 5107 status_manager.go:895] "Failed to get status for pod" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.827150 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:42 crc kubenswrapper[5107]: I1209 14:59:42.827542 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 09 14:59:44 crc kubenswrapper[5107]: I1209 14:59:44.155051 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:59:44 crc kubenswrapper[5107]: I1209 14:59:44.156923 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:59:45 crc kubenswrapper[5107]: E1209 14:59:45.035202 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:59:45Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:59:45Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:59:45Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:59:45Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" 
Dec 09 14:59:45 crc kubenswrapper[5107]: E1209 14:59:45.036044 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:45 crc kubenswrapper[5107]: E1209 14:59:45.036527 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:45 crc kubenswrapper[5107]: E1209 14:59:45.036827 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:45 crc kubenswrapper[5107]: E1209 14:59:45.037220 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:45 crc kubenswrapper[5107]: E1209 14:59:45.037239 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:59:47 crc kubenswrapper[5107]: E1209 14:59:47.217662 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:47 crc kubenswrapper[5107]: E1209 14:59:47.218280 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:47 crc kubenswrapper[5107]: E1209 14:59:47.218722 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:47 crc kubenswrapper[5107]: E1209 14:59:47.219300 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:47 crc kubenswrapper[5107]: E1209 14:59:47.219683 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:47 crc kubenswrapper[5107]: I1209 14:59:47.219718 5107 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 09 14:59:47 crc kubenswrapper[5107]: E1209 14:59:47.220096 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Dec 09 14:59:47 crc kubenswrapper[5107]: E1209 14:59:47.421035 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Dec 09 14:59:47 crc kubenswrapper[5107]: E1209 14:59:47.658302 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f9410ec06df4a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:59:39.352784714 +0000 UTC m=+227.076489613,LastTimestamp:2025-12-09 14:59:39.352784714 +0000 UTC m=+227.076489613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:59:47 crc kubenswrapper[5107]: E1209 14:59:47.822894 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Dec 09 14:59:48 crc kubenswrapper[5107]: E1209 14:59:48.624065 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Dec 09 14:59:49 crc kubenswrapper[5107]: I1209 14:59:49.817303 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:49 crc kubenswrapper[5107]: I1209 14:59:49.818399 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:49 crc kubenswrapper[5107]: I1209 14:59:49.818714 5107 status_manager.go:895] "Failed to get status for pod" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:49 crc kubenswrapper[5107]: I1209 14:59:49.841521 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="50482b5b-33e4-4375-b4ec-a1c0ebe2c67b" Dec 09 14:59:49 crc kubenswrapper[5107]: I1209 14:59:49.841566 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="50482b5b-33e4-4375-b4ec-a1c0ebe2c67b" Dec 09 14:59:49 crc kubenswrapper[5107]: E1209 14:59:49.842219 5107 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:49 crc kubenswrapper[5107]: I1209 14:59:49.842713 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:49 crc kubenswrapper[5107]: I1209 14:59:49.992810 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"72c05e6def0f45ac9ddab413ef09beb781169cb10eed30692df2ff46ca155865"} Dec 09 14:59:50 crc kubenswrapper[5107]: E1209 14:59:50.226017 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Dec 09 14:59:51 crc kubenswrapper[5107]: I1209 14:59:51.003863 5107 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="4fe80385fb9471c3789f8ef1ad5d21995963d93166348eddb5d6ec011e7b8751" exitCode=0 Dec 09 14:59:51 crc kubenswrapper[5107]: I1209 14:59:51.004163 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"4fe80385fb9471c3789f8ef1ad5d21995963d93166348eddb5d6ec011e7b8751"} Dec 09 14:59:51 crc kubenswrapper[5107]: I1209 14:59:51.006128 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="50482b5b-33e4-4375-b4ec-a1c0ebe2c67b" Dec 09 14:59:51 crc kubenswrapper[5107]: I1209 14:59:51.006186 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="50482b5b-33e4-4375-b4ec-a1c0ebe2c67b" Dec 09 14:59:51 crc kubenswrapper[5107]: E1209 14:59:51.007032 5107 mirror_client.go:138] "Failed 
deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:51 crc kubenswrapper[5107]: I1209 14:59:51.007575 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:51 crc kubenswrapper[5107]: I1209 14:59:51.008881 5107 status_manager.go:895] "Failed to get status for pod" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Dec 09 14:59:52 crc kubenswrapper[5107]: I1209 14:59:52.018488 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"99cb3f89d63eee9340c97b79c1a91584f8e972393709dfbb5c591aa79c0ccfb6"} Dec 09 14:59:52 crc kubenswrapper[5107]: I1209 14:59:52.018839 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"18967556c0430f3553ab080533bce41d9280e68a5b9f3133538827001314e8b3"} Dec 09 14:59:52 crc kubenswrapper[5107]: I1209 14:59:52.018851 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"23d7aa5d4b775cd49f4dc3e7de459db5978c4238ed9ccf9a71199b418f96504d"} Dec 09 14:59:52 crc kubenswrapper[5107]: I1209 14:59:52.022139 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:59:52 crc kubenswrapper[5107]: I1209 14:59:52.022199 5107 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca" exitCode=1 Dec 09 14:59:52 crc kubenswrapper[5107]: I1209 14:59:52.022314 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca"} Dec 09 14:59:52 crc kubenswrapper[5107]: I1209 14:59:52.023019 5107 scope.go:117] "RemoveContainer" containerID="770b2ea4f5cd64cd90ef891f9e2a1f60e78f0b284926af871933d7ca8898b9ca" Dec 09 14:59:53 crc kubenswrapper[5107]: I1209 14:59:53.036132 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:59:53 crc kubenswrapper[5107]: I1209 14:59:53.036775 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ddb80d38c84b885f57753b3cfe9e35f803b141c1751600a8a75407f78a133efc"} Dec 09 14:59:53 crc kubenswrapper[5107]: I1209 14:59:53.043380 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f4623b6cd85144ec1cbc33203351e53bcf4fa28da810c6d1903ebaeb6c0c560c"} Dec 09 14:59:53 crc kubenswrapper[5107]: I1209 14:59:53.043676 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d09be53fc02d16fedc4a6d2f3f6296c4502b120dc242f986bb4465bac0b467a0"} Dec 09 14:59:53 crc kubenswrapper[5107]: I1209 14:59:53.043859 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:53 crc kubenswrapper[5107]: I1209 14:59:53.043728 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="50482b5b-33e4-4375-b4ec-a1c0ebe2c67b" Dec 09 14:59:53 crc kubenswrapper[5107]: I1209 14:59:53.044129 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="50482b5b-33e4-4375-b4ec-a1c0ebe2c67b" Dec 09 14:59:53 crc kubenswrapper[5107]: I1209 14:59:53.345862 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:59:54 crc kubenswrapper[5107]: I1209 14:59:54.843483 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:54 crc kubenswrapper[5107]: I1209 14:59:54.843543 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:54 crc kubenswrapper[5107]: I1209 14:59:54.849228 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]log ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]etcd ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/generic-apiserver-start-informers ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/priority-and-fairness-filter ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-apiextensions-informers ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-apiextensions-controllers ok Dec 09 14:59:54 crc 
kubenswrapper[5107]: [+]poststarthook/crd-informer-synced ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-system-namespaces-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 09 14:59:54 crc kubenswrapper[5107]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/bootstrap-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/start-kube-aggregator-informers ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/apiservice-registration-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/apiservice-discovery-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]autoregister-completion ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/apiservice-openapi-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 09 14:59:54 crc kubenswrapper[5107]: livez check failed Dec 09 14:59:54 crc kubenswrapper[5107]: I1209 14:59:54.849348 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57755cc5f99000cc11e193051474d4e2" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:59:58 crc kubenswrapper[5107]: I1209 14:59:58.665148 5107 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:58 crc kubenswrapper[5107]: I1209 14:59:58.666241 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:59:58 crc kubenswrapper[5107]: I1209 14:59:58.777350 5107 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="5485859f-c402-41cb-8092-9cdf56697b18" Dec 09 14:59:59 crc kubenswrapper[5107]: I1209 14:59:59.079313 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="50482b5b-33e4-4375-b4ec-a1c0ebe2c67b" Dec 09 14:59:59 crc kubenswrapper[5107]: I1209 14:59:59.079384 5107 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="50482b5b-33e4-4375-b4ec-a1c0ebe2c67b" Dec 09 14:59:59 crc kubenswrapper[5107]: I1209 14:59:59.083406 5107 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="5485859f-c402-41cb-8092-9cdf56697b18" Dec 09 15:00:00 crc kubenswrapper[5107]: I1209 15:00:00.581958 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 15:00:00 crc kubenswrapper[5107]: I1209 15:00:00.593135 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 15:00:05 crc kubenswrapper[5107]: I1209 15:00:05.540704 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 09 15:00:05 crc kubenswrapper[5107]: I1209 15:00:05.617314 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 09 15:00:06 crc kubenswrapper[5107]: I1209 15:00:06.198168 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 09 15:00:06 crc kubenswrapper[5107]: I1209 15:00:06.206611 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 09 15:00:06 crc kubenswrapper[5107]: I1209 15:00:06.412744 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 09 15:00:07 crc kubenswrapper[5107]: I1209 15:00:07.412623 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:07 crc kubenswrapper[5107]: I1209 15:00:07.509609 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 09 15:00:07 crc kubenswrapper[5107]: I1209 15:00:07.998439 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 09 15:00:08 crc kubenswrapper[5107]: I1209 15:00:08.701808 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 09 15:00:08 crc kubenswrapper[5107]: I1209 15:00:08.739498 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 09 15:00:09 crc kubenswrapper[5107]: I1209 15:00:09.060293 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 09 15:00:09 crc kubenswrapper[5107]: I1209 15:00:09.388716 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 09 15:00:09 crc kubenswrapper[5107]: I1209 15:00:09.408118 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 09 15:00:10 crc kubenswrapper[5107]: I1209 
15:00:10.078943 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 09 15:00:10 crc kubenswrapper[5107]: I1209 15:00:10.079781 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 09 15:00:10 crc kubenswrapper[5107]: I1209 15:00:10.601913 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 09 15:00:10 crc kubenswrapper[5107]: I1209 15:00:10.622660 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 09 15:00:10 crc kubenswrapper[5107]: I1209 15:00:10.870114 5107 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.088191 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.097739 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.149314 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.153186 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.177096 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.344294 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.367138 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.405213 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.432863 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.436070 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.569841 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.850096 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 09 15:00:11 crc kubenswrapper[5107]: I1209 15:00:11.899214 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.024279 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.042308 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.103567 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.223973 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.370121 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.423429 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.705004 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.724048 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.739306 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.843245 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.883029 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:12 crc kubenswrapper[5107]: I1209 15:00:12.920060 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.125837 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.141439 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.184831 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.221687 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.310110 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.359012 5107 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.377376 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.422799 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.440381 5107 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.484150 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.490250 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.542103 5107 ???:1] "http: TLS handshake error from 192.168.126.11:56394: no serving certificate available for the kubelet" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.606569 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.649918 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.730058 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.831315 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:13 crc kubenswrapper[5107]: I1209 15:00:13.874103 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.025163 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.095781 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.095953 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.123679 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.142968 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.154456 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.154513 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.221737 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.407088 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.408307 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.436580 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.549737 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.564807 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.567034 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.585056 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.607841 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.773943 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.787900 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.862032 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.896785 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 09 15:00:14 crc kubenswrapper[5107]: I1209 15:00:14.907294 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.037589 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 09 
15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.062410 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.078379 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.257223 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.259131 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.264752 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.415517 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.447013 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.541686 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.632137 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.632372 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.669398 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.738596 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.772413 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.791476 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.842462 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 09 15:00:15 crc kubenswrapper[5107]: I1209 15:00:15.868711 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.046420 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.069541 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.190689 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.337734 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.451972 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.517902 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.572712 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.676306 5107 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.676388 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.691449 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.880317 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.887880 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.892354 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.916427 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.949228 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 09 15:00:16 crc kubenswrapper[5107]: I1209 15:00:16.950434 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.062688 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.096691 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.098725 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.238842 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.303234 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.315261 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.421008 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.499167 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.547634 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.627167 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.723304 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.743982 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.817314 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.838934 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.856174 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.873263 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 09 15:00:17 crc kubenswrapper[5107]: I1209 15:00:17.970073 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.018495 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.035532 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.054687 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:18 
crc kubenswrapper[5107]: I1209 15:00:18.079730 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.142477 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.175694 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.240048 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.241026 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.280613 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.295853 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.334714 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.344516 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.351354 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.372268 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.404113 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.411088 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.418981 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.463078 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.499608 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.566998 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.594712 5107 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.609964 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.652769 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.679577 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.710746 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.825777 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.868385 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.904393 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.970073 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.988095 5107 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.988958 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.988947999 podStartE2EDuration="39.988947999s" podCreationTimestamp="2025-12-09 14:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:59:58.704853719 +0000 UTC m=+246.428558658" watchObservedRunningTime="2025-12-09 15:00:18.988947999 +0000 UTC m=+266.712652878" Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.997493 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 15:00:18 crc kubenswrapper[5107]: I1209 15:00:18.997547 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.001942 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.018886 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.018866439 podStartE2EDuration="21.018866439s" podCreationTimestamp="2025-12-09 14:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:00:19.016285259 +0000 UTC m=+266.739990148" 
watchObservedRunningTime="2025-12-09 15:00:19.018866439 +0000 UTC m=+266.742571328" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.268814 5107 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.280368 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.347307 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.378237 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.441158 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.509468 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.570281 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.649345 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.783043 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.847401 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.915633 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 09 15:00:19 crc kubenswrapper[5107]: I1209 15:00:19.919144 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.053506 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.136739 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.182199 5107 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.206156 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.214503 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.283095 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.283504 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.284090 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.305001 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.382518 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.455729 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.480963 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.561671 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.658157 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.737966 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.753808 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.776973 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.843027 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.871776 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.891066 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d"] Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.892680 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" containerName="installer" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.892704 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" containerName="installer" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.892837 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d36e4b34-f551-453a-bc2c-6a250acdb84e" 
containerName="installer" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.896588 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d"] Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.896744 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.899355 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.899636 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.904703 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.962022 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e4af015-72e9-44a9-9944-380db3d717fa-config-volume\") pod \"collect-profiles-29421540-mfd9d\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.962374 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e4af015-72e9-44a9-9944-380db3d717fa-secret-volume\") pod \"collect-profiles-29421540-mfd9d\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:20 crc kubenswrapper[5107]: I1209 15:00:20.962415 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv7dg\" (UniqueName: \"kubernetes.io/projected/0e4af015-72e9-44a9-9944-380db3d717fa-kube-api-access-wv7dg\") pod \"collect-profiles-29421540-mfd9d\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.043890 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.063799 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e4af015-72e9-44a9-9944-380db3d717fa-secret-volume\") pod \"collect-profiles-29421540-mfd9d\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.063861 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wv7dg\" (UniqueName: \"kubernetes.io/projected/0e4af015-72e9-44a9-9944-380db3d717fa-kube-api-access-wv7dg\") pod \"collect-profiles-29421540-mfd9d\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.063913 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e4af015-72e9-44a9-9944-380db3d717fa-config-volume\") pod \"collect-profiles-29421540-mfd9d\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.065414 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e4af015-72e9-44a9-9944-380db3d717fa-config-volume\") pod \"collect-profiles-29421540-mfd9d\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.076347 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e4af015-72e9-44a9-9944-380db3d717fa-secret-volume\") pod \"collect-profiles-29421540-mfd9d\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.085640 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv7dg\" (UniqueName: \"kubernetes.io/projected/0e4af015-72e9-44a9-9944-380db3d717fa-kube-api-access-wv7dg\") pod \"collect-profiles-29421540-mfd9d\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.116086 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.118857 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.124138 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.179399 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.214151 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.220993 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.241879 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.355918 5107 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.356181 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842" gracePeriod=5 Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.375060 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.395570 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.427068 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d"] Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.587983 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.619539 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.662033 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.669666 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.709900 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.736893 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.743428 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.759302 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.768489 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.860041 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 09 15:00:21 crc kubenswrapper[5107]: I1209 15:00:21.974785 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.002345 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.019836 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.033035 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.080841 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.135148 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.214765 5107 generic.go:358] "Generic (PLEG): container finished" podID="0e4af015-72e9-44a9-9944-380db3d717fa" containerID="b716de429fe2aada9b1faef8d7e59d3fed2ae28bd089c06850cb4e94081a24c8" exitCode=0 Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.214890 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" event={"ID":"0e4af015-72e9-44a9-9944-380db3d717fa","Type":"ContainerDied","Data":"b716de429fe2aada9b1faef8d7e59d3fed2ae28bd089c06850cb4e94081a24c8"} Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.214929 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" event={"ID":"0e4af015-72e9-44a9-9944-380db3d717fa","Type":"ContainerStarted","Data":"8663d2e409dde413b5dc9b4b18325d4e2e4bac919140ecaf2e96e2bb0d0b7748"} Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.308461 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.318992 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.379082 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.500709 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.581362 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.604435 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 
15:00:22.694881 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.701125 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.781443 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.870878 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 09 15:00:22 crc kubenswrapper[5107]: I1209 15:00:22.887166 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.001758 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.053407 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.081857 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.088189 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.153238 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.207678 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.277263 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.298904 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.420249 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.474603 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.503213 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv7dg\" (UniqueName: \"kubernetes.io/projected/0e4af015-72e9-44a9-9944-380db3d717fa-kube-api-access-wv7dg\") pod \"0e4af015-72e9-44a9-9944-380db3d717fa\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.503349 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e4af015-72e9-44a9-9944-380db3d717fa-secret-volume\") pod \"0e4af015-72e9-44a9-9944-380db3d717fa\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.503466 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e4af015-72e9-44a9-9944-380db3d717fa-config-volume\") pod \"0e4af015-72e9-44a9-9944-380db3d717fa\" (UID: \"0e4af015-72e9-44a9-9944-380db3d717fa\") " Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.504389 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4af015-72e9-44a9-9944-380db3d717fa-config-volume" (OuterVolumeSpecName: "config-volume") pod "0e4af015-72e9-44a9-9944-380db3d717fa" (UID: "0e4af015-72e9-44a9-9944-380db3d717fa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.509678 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4af015-72e9-44a9-9944-380db3d717fa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0e4af015-72e9-44a9-9944-380db3d717fa" (UID: "0e4af015-72e9-44a9-9944-380db3d717fa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.511481 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4af015-72e9-44a9-9944-380db3d717fa-kube-api-access-wv7dg" (OuterVolumeSpecName: "kube-api-access-wv7dg") pod "0e4af015-72e9-44a9-9944-380db3d717fa" (UID: "0e4af015-72e9-44a9-9944-380db3d717fa"). InnerVolumeSpecName "kube-api-access-wv7dg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.554737 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.605290 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wv7dg\" (UniqueName: \"kubernetes.io/projected/0e4af015-72e9-44a9-9944-380db3d717fa-kube-api-access-wv7dg\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.605323 5107 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e4af015-72e9-44a9-9944-380db3d717fa-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.605345 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e4af015-72e9-44a9-9944-380db3d717fa-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.629411 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.701959 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 09 15:00:23 crc kubenswrapper[5107]: I1209 15:00:23.813176 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.063882 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.156319 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.226482 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.226513 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421540-mfd9d" event={"ID":"0e4af015-72e9-44a9-9944-380db3d717fa","Type":"ContainerDied","Data":"8663d2e409dde413b5dc9b4b18325d4e2e4bac919140ecaf2e96e2bb0d0b7748"} Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.226549 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8663d2e409dde413b5dc9b4b18325d4e2e4bac919140ecaf2e96e2bb0d0b7748" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.247132 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.247175 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.274556 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.608482 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.820775 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.954234 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 09 15:00:24 crc kubenswrapper[5107]: I1209 15:00:24.958562 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 09 15:00:25 crc kubenswrapper[5107]: I1209 15:00:25.031603 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 09 15:00:25 crc kubenswrapper[5107]: I1209 15:00:25.200144 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 09 15:00:25 crc kubenswrapper[5107]: I1209 15:00:25.218742 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:00:25 crc kubenswrapper[5107]: I1209 15:00:25.401227 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 09 15:00:25 crc kubenswrapper[5107]: I1209 15:00:25.853120 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 09 15:00:26 crc kubenswrapper[5107]: I1209 15:00:26.271671 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 09 15:00:26 crc kubenswrapper[5107]: I1209 15:00:26.273091 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 09 15:00:26 crc kubenswrapper[5107]: I1209 
15:00:26.572645 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 09 15:00:26 crc kubenswrapper[5107]: I1209 15:00:26.734517 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 09 15:00:26 crc kubenswrapper[5107]: I1209 15:00:26.947112 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 09 15:00:26 crc kubenswrapper[5107]: I1209 15:00:26.947279 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.048485 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.048572 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.048647 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.048671 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.048717 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.048721 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.048859 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.048937 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.049312 5107 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.049358 5107 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.049367 5107 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.049397 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.058068 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.151263 5107 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.151317 5107 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.250214 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.250256 5107 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842" exitCode=137 Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.250434 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.251682 5107 scope.go:117] "RemoveContainer" containerID="09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.272084 5107 scope.go:117] "RemoveContainer" containerID="09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842" Dec 09 15:00:27 crc kubenswrapper[5107]: E1209 15:00:27.272557 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842\": container with ID starting with 09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842 not found: ID does not exist" containerID="09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842" Dec 09 15:00:27 crc kubenswrapper[5107]: I1209 15:00:27.272590 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842"} err="failed to get container status \"09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842\": rpc error: code = NotFound desc = could not find container \"09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842\": container with ID starting with 09a73ce7d54219acfad888d879b4d923aeccef63c91f02b44f7f8cb7e2582842 not found: ID does not exist" Dec 09 15:00:28 crc kubenswrapper[5107]: I1209 15:00:28.825301 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 09 15:00:28 crc kubenswrapper[5107]: I1209 15:00:28.825925 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 09 15:00:28 crc kubenswrapper[5107]: I1209 15:00:28.836164 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 15:00:28 crc kubenswrapper[5107]: I1209 15:00:28.836198 5107 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f82be892-1cef-48e3-93e3-f4fd4de3b7aa" Dec 09 15:00:28 crc kubenswrapper[5107]: I1209 15:00:28.841879 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 15:00:28 crc kubenswrapper[5107]: I1209 15:00:28.841916 5107 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f82be892-1cef-48e3-93e3-f4fd4de3b7aa" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.025627 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-2vgtw"] Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.027541 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" podUID="8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" containerName="controller-manager" containerID="cri-o://176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512" gracePeriod=30 Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.036197 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj"] Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.346045 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" podUID="cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" containerName="route-controller-manager" containerID="cri-o://695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3" gracePeriod=30 Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.662909 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.713577 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd"] Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.714218 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.714241 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.714273 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" containerName="route-controller-manager" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.714280 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" containerName="route-controller-manager" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.714291 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e4af015-72e9-44a9-9944-380db3d717fa" containerName="collect-profiles" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.714296 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4af015-72e9-44a9-9944-380db3d717fa" containerName="collect-profiles" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.714419 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e4af015-72e9-44a9-9944-380db3d717fa" containerName="collect-profiles" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.714432 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.714442 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" containerName="route-controller-manager" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.717512 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd"] Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.717617 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.811998 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-serving-cert\") pod \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812058 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhh5j\" (UniqueName: \"kubernetes.io/projected/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-kube-api-access-zhh5j\") pod \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812129 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-client-ca\") pod \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812174 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-config\") pod \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812270 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-tmp\") pod \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\" (UID: \"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524\") " Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812414 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-client-ca\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812443 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7tz\" (UniqueName: \"kubernetes.io/projected/18c0f1ef-480e-44ca-9483-0d5940828937-kube-api-access-2b7tz\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812468 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18c0f1ef-480e-44ca-9483-0d5940828937-serving-cert\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812495 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18c0f1ef-480e-44ca-9483-0d5940828937-tmp\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: 
\"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812522 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-config\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812662 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-client-ca" (OuterVolumeSpecName: "client-ca") pod "cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" (UID: "cb028dc6-bfe0-4ca9-8e81-4b2a9b954524"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.812847 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-tmp" (OuterVolumeSpecName: "tmp") pod "cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" (UID: "cb028dc6-bfe0-4ca9-8e81-4b2a9b954524"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.813019 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-config" (OuterVolumeSpecName: "config") pod "cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" (UID: "cb028dc6-bfe0-4ca9-8e81-4b2a9b954524"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.822786 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" (UID: "cb028dc6-bfe0-4ca9-8e81-4b2a9b954524"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.823433 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-kube-api-access-zhh5j" (OuterVolumeSpecName: "kube-api-access-zhh5j") pod "cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" (UID: "cb028dc6-bfe0-4ca9-8e81-4b2a9b954524"). InnerVolumeSpecName "kube-api-access-zhh5j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.878380 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.909673 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-549445d8b-gx6kw"] Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.910587 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" containerName="controller-manager" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.910618 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" containerName="controller-manager" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.910761 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" containerName="controller-manager" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.914041 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2b7tz\" (UniqueName: \"kubernetes.io/projected/18c0f1ef-480e-44ca-9483-0d5940828937-kube-api-access-2b7tz\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.914123 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18c0f1ef-480e-44ca-9483-0d5940828937-serving-cert\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.914164 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18c0f1ef-480e-44ca-9483-0d5940828937-tmp\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.914292 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-config\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.914426 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-client-ca\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.914475 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.914487 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-config\") on node 
\"crc\" DevicePath \"\"" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.914496 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.914504 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.914514 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhh5j\" (UniqueName: \"kubernetes.io/projected/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524-kube-api-access-zhh5j\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.915453 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18c0f1ef-480e-44ca-9483-0d5940828937-tmp\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.915577 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-client-ca\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.917168 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-config\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.920162 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549445d8b-gx6kw"] Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.920410 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.923945 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18c0f1ef-480e-44ca-9483-0d5940828937-serving-cert\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:43 crc kubenswrapper[5107]: I1209 15:00:43.934359 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b7tz\" (UniqueName: \"kubernetes.io/projected/18c0f1ef-480e-44ca-9483-0d5940828937-kube-api-access-2b7tz\") pod \"route-controller-manager-7d449df854-n52vd\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.014948 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-config\") pod \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.015046 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-tmp\") pod \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.015096 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-serving-cert\") pod \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.015147 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-client-ca\") pod \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.015225 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m56j\" (UniqueName: \"kubernetes.io/projected/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-kube-api-access-7m56j\") pod \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.015246 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-proxy-ca-bundles\") pod \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\" (UID: \"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b\") " Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.015398 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fafd2c8f-32a4-4566-bacb-ff0973d4f158-serving-cert\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.015442 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fafd2c8f-32a4-4566-bacb-ff0973d4f158-tmp\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.015499 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-config\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.016274 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-tmp" (OuterVolumeSpecName: "tmp") pod "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" (UID: "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.017052 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" (UID: "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.017423 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-proxy-ca-bundles\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.017486 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7jrh\" (UniqueName: \"kubernetes.io/projected/fafd2c8f-32a4-4566-bacb-ff0973d4f158-kube-api-access-m7jrh\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.017473 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" (UID: "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.017521 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-client-ca\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.017620 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-config" (OuterVolumeSpecName: "config") pod "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" (UID: "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.017829 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.017857 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.017870 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.017887 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-config\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.020102 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" (UID: "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.021617 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-kube-api-access-7m56j" (OuterVolumeSpecName: "kube-api-access-7m56j") pod "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" (UID: "8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b"). InnerVolumeSpecName "kube-api-access-7m56j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.031201 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.119256 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fafd2c8f-32a4-4566-bacb-ff0973d4f158-tmp\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.119362 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-config\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.119399 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-proxy-ca-bundles\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.119435 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7jrh\" (UniqueName: \"kubernetes.io/projected/fafd2c8f-32a4-4566-bacb-ff0973d4f158-kube-api-access-m7jrh\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.119464 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-client-ca\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.119536 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fafd2c8f-32a4-4566-bacb-ff0973d4f158-serving-cert\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.119588 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7m56j\" (UniqueName: \"kubernetes.io/projected/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-kube-api-access-7m56j\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.119604 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.121924 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fafd2c8f-32a4-4566-bacb-ff0973d4f158-tmp\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc 
kubenswrapper[5107]: I1209 15:00:44.122255 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-proxy-ca-bundles\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.122770 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-client-ca\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.123142 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-config\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.128936 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fafd2c8f-32a4-4566-bacb-ff0973d4f158-serving-cert\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.144386 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7jrh\" (UniqueName: \"kubernetes.io/projected/fafd2c8f-32a4-4566-bacb-ff0973d4f158-kube-api-access-m7jrh\") pod \"controller-manager-549445d8b-gx6kw\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.154517 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.154613 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.154679 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.155546 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9ff12bb0bcdabee200b9ad2d18d95ed57d2bf2ef0990fb07cb22ecab1d5e617"} pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.155664 5107 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" containerID="cri-o://c9ff12bb0bcdabee200b9ad2d18d95ed57d2bf2ef0990fb07cb22ecab1d5e617" gracePeriod=600 Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.253985 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.291860 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd"] Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.358666 5107 generic.go:358] "Generic (PLEG): container finished" podID="8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" containerID="176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512" exitCode=0 Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.358815 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" event={"ID":"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b","Type":"ContainerDied","Data":"176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512"} Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.358838 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.358862 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-2vgtw" event={"ID":"8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b","Type":"ContainerDied","Data":"38456325b4bc014e6be346ec3e133bf7f3fcc1545e51ca6a5d93614e323577ff"} Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.358888 5107 scope.go:117] "RemoveContainer" containerID="176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.364431 5107 generic.go:358] "Generic (PLEG): container finished" podID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerID="c9ff12bb0bcdabee200b9ad2d18d95ed57d2bf2ef0990fb07cb22ecab1d5e617" exitCode=0 Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.364572 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerDied","Data":"c9ff12bb0bcdabee200b9ad2d18d95ed57d2bf2ef0990fb07cb22ecab1d5e617"} Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.365772 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" event={"ID":"18c0f1ef-480e-44ca-9483-0d5940828937","Type":"ContainerStarted","Data":"55335cb29c78b5e1507d9d6f597c8f94836874bf03afa9b26cc01f007f454e5e"} Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.367752 5107 generic.go:358] "Generic (PLEG): container finished" podID="cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" containerID="695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3" exitCode=0 Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.367926 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" event={"ID":"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524","Type":"ContainerDied","Data":"695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3"} Dec 09 
15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.367955 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" event={"ID":"cb028dc6-bfe0-4ca9-8e81-4b2a9b954524","Type":"ContainerDied","Data":"56c05abc256d3c26e533deb782c0c61210b4e1c1cc2f4e250c6059a7db01d309"} Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.368055 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.396526 5107 scope.go:117] "RemoveContainer" containerID="176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512" Dec 09 15:00:44 crc kubenswrapper[5107]: E1209 15:00:44.397141 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512\": container with ID starting with 176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512 not found: ID does not exist" containerID="176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.397231 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512"} err="failed to get container status \"176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512\": rpc error: code = NotFound desc = could not find container \"176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512\": container with ID starting with 176ca282e13d097088a6db305dc4d8c5ad44cfa8ef7e13d68eb4678630c70512 not found: ID does not exist" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.397368 5107 scope.go:117] "RemoveContainer" containerID="695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.418699 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-2vgtw"] Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.419520 5107 scope.go:117] "RemoveContainer" containerID="695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3" Dec 09 15:00:44 crc kubenswrapper[5107]: E1209 15:00:44.420109 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3\": container with ID starting with 695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3 not found: ID does not exist" containerID="695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.420284 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3"} err="failed to get container status \"695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3\": rpc error: code = NotFound desc = could not find container \"695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3\": container with ID starting with 695fd746b9a32eb7383bfb21e870b4466853039bce68a7a6615cc4f8f3d455a3 not found: ID does not exist" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.423993 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-65b6cccf98-2vgtw"] Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.434761 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj"] Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.443064 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9lrlj"] Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.459019 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549445d8b-gx6kw"] Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.825306 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b" path="/var/lib/kubelet/pods/8b8f3d5d-a12e-4845-aecf-253a1fe8cd0b/volumes" Dec 09 15:00:44 crc kubenswrapper[5107]: I1209 15:00:44.826103 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb028dc6-bfe0-4ca9-8e81-4b2a9b954524" path="/var/lib/kubelet/pods/cb028dc6-bfe0-4ca9-8e81-4b2a9b954524/volumes" Dec 09 15:00:45 crc kubenswrapper[5107]: I1209 15:00:45.036749 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd"] Dec 09 15:00:45 crc kubenswrapper[5107]: I1209 15:00:45.383541 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" event={"ID":"fafd2c8f-32a4-4566-bacb-ff0973d4f158","Type":"ContainerStarted","Data":"10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642"} Dec 09 15:00:45 crc kubenswrapper[5107]: I1209 15:00:45.383608 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:45 crc kubenswrapper[5107]: I1209 15:00:45.383629 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" event={"ID":"fafd2c8f-32a4-4566-bacb-ff0973d4f158","Type":"ContainerStarted","Data":"b160549a1f1973ced9d66d5a644bfbbfe24fbd4b6e83c1e7bfac7b63502ee9bb"} Dec 09 15:00:45 crc kubenswrapper[5107]: I1209 15:00:45.389248 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerStarted","Data":"34519aada4fd9eb2bcaeeb1db4e761259a091e00fc9ecec917ee75d1c4f3c68a"} Dec 09 15:00:45 crc kubenswrapper[5107]: I1209 15:00:45.393674 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" event={"ID":"18c0f1ef-480e-44ca-9483-0d5940828937","Type":"ContainerStarted","Data":"c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a"} Dec 09 15:00:45 crc kubenswrapper[5107]: I1209 15:00:45.400226 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:00:45 crc kubenswrapper[5107]: I1209 15:00:45.411123 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" podStartSLOduration=2.4110919920000002 podStartE2EDuration="2.411091992s" podCreationTimestamp="2025-12-09 15:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:00:45.403046874 +0000 UTC m=+293.126751833" watchObservedRunningTime="2025-12-09 15:00:45.411091992 +0000 UTC m=+293.134796891" Dec 09 15:00:45 crc kubenswrapper[5107]: I1209 15:00:45.477117 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" podStartSLOduration=2.477098658 podStartE2EDuration="2.477098658s" podCreationTimestamp="2025-12-09 15:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:00:45.47458541 +0000 UTC m=+293.198290299" watchObservedRunningTime="2025-12-09 15:00:45.477098658 +0000 UTC m=+293.200803547" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.404068 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" podUID="18c0f1ef-480e-44ca-9483-0d5940828937" containerName="route-controller-manager" containerID="cri-o://c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a" gracePeriod=30 Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.404643 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.415536 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.756625 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.797451 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl"] Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.798120 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="18c0f1ef-480e-44ca-9483-0d5940828937" containerName="route-controller-manager" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.798148 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c0f1ef-480e-44ca-9483-0d5940828937" containerName="route-controller-manager" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.798304 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="18c0f1ef-480e-44ca-9483-0d5940828937" containerName="route-controller-manager" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.804669 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.813088 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl"] Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.860263 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18c0f1ef-480e-44ca-9483-0d5940828937-serving-cert\") pod \"18c0f1ef-480e-44ca-9483-0d5940828937\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.860442 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b7tz\" (UniqueName: \"kubernetes.io/projected/18c0f1ef-480e-44ca-9483-0d5940828937-kube-api-access-2b7tz\") pod \"18c0f1ef-480e-44ca-9483-0d5940828937\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.860492 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-config\") pod \"18c0f1ef-480e-44ca-9483-0d5940828937\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.860518 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-client-ca\") pod \"18c0f1ef-480e-44ca-9483-0d5940828937\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.860559 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18c0f1ef-480e-44ca-9483-0d5940828937-tmp\") pod \"18c0f1ef-480e-44ca-9483-0d5940828937\" (UID: \"18c0f1ef-480e-44ca-9483-0d5940828937\") " Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.861221 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18c0f1ef-480e-44ca-9483-0d5940828937-tmp" (OuterVolumeSpecName: "tmp") pod "18c0f1ef-480e-44ca-9483-0d5940828937" (UID: "18c0f1ef-480e-44ca-9483-0d5940828937"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.861904 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-client-ca" (OuterVolumeSpecName: "client-ca") pod "18c0f1ef-480e-44ca-9483-0d5940828937" (UID: "18c0f1ef-480e-44ca-9483-0d5940828937"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.862503 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-config" (OuterVolumeSpecName: "config") pod "18c0f1ef-480e-44ca-9483-0d5940828937" (UID: "18c0f1ef-480e-44ca-9483-0d5940828937"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.867730 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c0f1ef-480e-44ca-9483-0d5940828937-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "18c0f1ef-480e-44ca-9483-0d5940828937" (UID: "18c0f1ef-480e-44ca-9483-0d5940828937"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.870966 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c0f1ef-480e-44ca-9483-0d5940828937-kube-api-access-2b7tz" (OuterVolumeSpecName: "kube-api-access-2b7tz") pod "18c0f1ef-480e-44ca-9483-0d5940828937" (UID: "18c0f1ef-480e-44ca-9483-0d5940828937"). InnerVolumeSpecName "kube-api-access-2b7tz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.965202 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmpq4\" (UniqueName: \"kubernetes.io/projected/f033a772-7dea-42c9-9007-c5164c869c2d-kube-api-access-fmpq4\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.965274 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f033a772-7dea-42c9-9007-c5164c869c2d-tmp\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.965320 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f033a772-7dea-42c9-9007-c5164c869c2d-serving-cert\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.965382 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f033a772-7dea-42c9-9007-c5164c869c2d-config\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.965425 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f033a772-7dea-42c9-9007-c5164c869c2d-client-ca\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.965467 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18c0f1ef-480e-44ca-9483-0d5940828937-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.965478 5107 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2b7tz\" (UniqueName: \"kubernetes.io/projected/18c0f1ef-480e-44ca-9483-0d5940828937-kube-api-access-2b7tz\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.965491 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-config\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.965499 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18c0f1ef-480e-44ca-9483-0d5940828937-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:46 crc kubenswrapper[5107]: I1209 15:00:46.965508 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18c0f1ef-480e-44ca-9483-0d5940828937-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.066671 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fmpq4\" (UniqueName: \"kubernetes.io/projected/f033a772-7dea-42c9-9007-c5164c869c2d-kube-api-access-fmpq4\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.066726 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f033a772-7dea-42c9-9007-c5164c869c2d-tmp\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.066757 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f033a772-7dea-42c9-9007-c5164c869c2d-serving-cert\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.066791 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f033a772-7dea-42c9-9007-c5164c869c2d-config\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.066819 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f033a772-7dea-42c9-9007-c5164c869c2d-client-ca\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.067750 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f033a772-7dea-42c9-9007-c5164c869c2d-tmp\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 
crc kubenswrapper[5107]: I1209 15:00:47.068431 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f033a772-7dea-42c9-9007-c5164c869c2d-client-ca\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.068546 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f033a772-7dea-42c9-9007-c5164c869c2d-config\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.071210 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f033a772-7dea-42c9-9007-c5164c869c2d-serving-cert\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.083487 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmpq4\" (UniqueName: \"kubernetes.io/projected/f033a772-7dea-42c9-9007-c5164c869c2d-kube-api-access-fmpq4\") pod \"route-controller-manager-56c564ccdd-6pmbl\" (UID: \"f033a772-7dea-42c9-9007-c5164c869c2d\") " pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.123139 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.315239 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl"] Dec 09 15:00:47 crc kubenswrapper[5107]: W1209 15:00:47.318600 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf033a772_7dea_42c9_9007_c5164c869c2d.slice/crio-72b01b261e3768160393feb3ee40c3856fd5ce164c953839bb07102cd7d4a551 WatchSource:0}: Error finding container 72b01b261e3768160393feb3ee40c3856fd5ce164c953839bb07102cd7d4a551: Status 404 returned error can't find the container with id 72b01b261e3768160393feb3ee40c3856fd5ce164c953839bb07102cd7d4a551 Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.409015 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" event={"ID":"f033a772-7dea-42c9-9007-c5164c869c2d","Type":"ContainerStarted","Data":"72b01b261e3768160393feb3ee40c3856fd5ce164c953839bb07102cd7d4a551"} Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.410476 5107 generic.go:358] "Generic (PLEG): container finished" podID="18c0f1ef-480e-44ca-9483-0d5940828937" containerID="c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a" exitCode=0 Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.410724 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" event={"ID":"18c0f1ef-480e-44ca-9483-0d5940828937","Type":"ContainerDied","Data":"c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a"} Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.410754 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" event={"ID":"18c0f1ef-480e-44ca-9483-0d5940828937","Type":"ContainerDied","Data":"55335cb29c78b5e1507d9d6f597c8f94836874bf03afa9b26cc01f007f454e5e"} Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.410771 5107 scope.go:117] "RemoveContainer" containerID="c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.411257 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.431677 5107 scope.go:117] "RemoveContainer" containerID="c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a" Dec 09 15:00:47 crc kubenswrapper[5107]: E1209 15:00:47.432101 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a\": container with ID starting with c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a not found: ID does not exist" containerID="c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.432143 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a"} err="failed to get container status \"c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a\": rpc error: code = NotFound desc = could not find container \"c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a\": container with ID starting with c4a35bead94c37d11177accff168c6dbfdadc188ccf0b28fd2dcdc51b452bf4a not found: ID does not exist" Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.444351 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd"] Dec 09 15:00:47 crc kubenswrapper[5107]: I1209 15:00:47.450290 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d449df854-n52vd"] Dec 09 15:00:48 crc kubenswrapper[5107]: I1209 15:00:48.417934 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" event={"ID":"f033a772-7dea-42c9-9007-c5164c869c2d","Type":"ContainerStarted","Data":"f8430b75b841c2e9bdc209d1762c4584c23f7e248d6078af578defbb74483467"} Dec 09 15:00:48 crc kubenswrapper[5107]: I1209 15:00:48.419153 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:48 crc kubenswrapper[5107]: I1209 15:00:48.424190 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" Dec 09 15:00:48 crc kubenswrapper[5107]: I1209 15:00:48.436263 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56c564ccdd-6pmbl" podStartSLOduration=3.436244789 podStartE2EDuration="3.436244789s" podCreationTimestamp="2025-12-09 15:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:00:48.432435605 +0000 UTC m=+296.156140504" watchObservedRunningTime="2025-12-09 15:00:48.436244789 +0000 UTC m=+296.159949688" Dec 09 15:00:48 crc kubenswrapper[5107]: I1209 15:00:48.824735 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c0f1ef-480e-44ca-9483-0d5940828937" path="/var/lib/kubelet/pods/18c0f1ef-480e-44ca-9483-0d5940828937/volumes" Dec 09 15:00:50 crc kubenswrapper[5107]: I1209 15:00:50.183669 5107 ???:1] "http: TLS handshake error from 192.168.126.11:48586: no serving certificate 
available for the kubelet" Dec 09 15:00:53 crc kubenswrapper[5107]: I1209 15:00:53.019248 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 15:00:53 crc kubenswrapper[5107]: I1209 15:00:53.022149 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 15:01:02 crc kubenswrapper[5107]: I1209 15:01:02.420007 5107 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.009618 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-549445d8b-gx6kw"] Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.010514 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" podUID="fafd2c8f-32a4-4566-bacb-ff0973d4f158" containerName="controller-manager" containerID="cri-o://10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642" gracePeriod=30 Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.481369 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.510288 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc"] Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.511011 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fafd2c8f-32a4-4566-bacb-ff0973d4f158" containerName="controller-manager" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.511037 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafd2c8f-32a4-4566-bacb-ff0973d4f158" containerName="controller-manager" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.511162 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="fafd2c8f-32a4-4566-bacb-ff0973d4f158" containerName="controller-manager" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.515693 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.525290 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc"] Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.572654 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fafd2c8f-32a4-4566-bacb-ff0973d4f158-tmp\") pod \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.572704 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-client-ca\") pod \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.572775 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fafd2c8f-32a4-4566-bacb-ff0973d4f158-serving-cert\") pod \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.572799 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-config\") pod \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.572881 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7jrh\" (UniqueName: \"kubernetes.io/projected/fafd2c8f-32a4-4566-bacb-ff0973d4f158-kube-api-access-m7jrh\") pod \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.572939 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-proxy-ca-bundles\") pod \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\" (UID: \"fafd2c8f-32a4-4566-bacb-ff0973d4f158\") " Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.573166 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fafd2c8f-32a4-4566-bacb-ff0973d4f158-tmp" (OuterVolumeSpecName: "tmp") pod "fafd2c8f-32a4-4566-bacb-ff0973d4f158" (UID: "fafd2c8f-32a4-4566-bacb-ff0973d4f158"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.573536 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fafd2c8f-32a4-4566-bacb-ff0973d4f158" (UID: "fafd2c8f-32a4-4566-bacb-ff0973d4f158"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.573580 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-config" (OuterVolumeSpecName: "config") pod "fafd2c8f-32a4-4566-bacb-ff0973d4f158" (UID: "fafd2c8f-32a4-4566-bacb-ff0973d4f158"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.573798 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-client-ca" (OuterVolumeSpecName: "client-ca") pod "fafd2c8f-32a4-4566-bacb-ff0973d4f158" (UID: "fafd2c8f-32a4-4566-bacb-ff0973d4f158"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.578642 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fafd2c8f-32a4-4566-bacb-ff0973d4f158-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fafd2c8f-32a4-4566-bacb-ff0973d4f158" (UID: "fafd2c8f-32a4-4566-bacb-ff0973d4f158"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.581488 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fafd2c8f-32a4-4566-bacb-ff0973d4f158-kube-api-access-m7jrh" (OuterVolumeSpecName: "kube-api-access-m7jrh") pod "fafd2c8f-32a4-4566-bacb-ff0973d4f158" (UID: "fafd2c8f-32a4-4566-bacb-ff0973d4f158"). InnerVolumeSpecName "kube-api-access-m7jrh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.674819 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a468074a-3b33-41b3-a4a3-fd03158f1d07-config\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.674926 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a468074a-3b33-41b3-a4a3-fd03158f1d07-serving-cert\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.675118 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a468074a-3b33-41b3-a4a3-fd03158f1d07-tmp\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.675197 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2kw\" (UniqueName: \"kubernetes.io/projected/a468074a-3b33-41b3-a4a3-fd03158f1d07-kube-api-access-rb2kw\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc 
kubenswrapper[5107]: I1209 15:01:23.675266 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a468074a-3b33-41b3-a4a3-fd03158f1d07-proxy-ca-bundles\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.675375 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a468074a-3b33-41b3-a4a3-fd03158f1d07-client-ca\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.675464 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.675493 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fafd2c8f-32a4-4566-bacb-ff0973d4f158-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.675505 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.675518 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fafd2c8f-32a4-4566-bacb-ff0973d4f158-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.675529 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafd2c8f-32a4-4566-bacb-ff0973d4f158-config\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.675539 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m7jrh\" (UniqueName: \"kubernetes.io/projected/fafd2c8f-32a4-4566-bacb-ff0973d4f158-kube-api-access-m7jrh\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.776917 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a468074a-3b33-41b3-a4a3-fd03158f1d07-serving-cert\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.776981 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a468074a-3b33-41b3-a4a3-fd03158f1d07-tmp\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.777001 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rb2kw\" (UniqueName: \"kubernetes.io/projected/a468074a-3b33-41b3-a4a3-fd03158f1d07-kube-api-access-rb2kw\") pod 
\"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.777021 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a468074a-3b33-41b3-a4a3-fd03158f1d07-proxy-ca-bundles\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.777145 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a468074a-3b33-41b3-a4a3-fd03158f1d07-client-ca\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.777217 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a468074a-3b33-41b3-a4a3-fd03158f1d07-config\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.777833 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a468074a-3b33-41b3-a4a3-fd03158f1d07-tmp\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.778183 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a468074a-3b33-41b3-a4a3-fd03158f1d07-client-ca\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.778244 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a468074a-3b33-41b3-a4a3-fd03158f1d07-proxy-ca-bundles\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.778653 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a468074a-3b33-41b3-a4a3-fd03158f1d07-config\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.781093 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a468074a-3b33-41b3-a4a3-fd03158f1d07-serving-cert\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.805812 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-rb2kw\" (UniqueName: \"kubernetes.io/projected/a468074a-3b33-41b3-a4a3-fd03158f1d07-kube-api-access-rb2kw\") pod \"controller-manager-6d6cd4dd85-4bllc\" (UID: \"a468074a-3b33-41b3-a4a3-fd03158f1d07\") " pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.840680 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.907351 5107 generic.go:358] "Generic (PLEG): container finished" podID="fafd2c8f-32a4-4566-bacb-ff0973d4f158" containerID="10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642" exitCode=0 Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.907426 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.907456 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" event={"ID":"fafd2c8f-32a4-4566-bacb-ff0973d4f158","Type":"ContainerDied","Data":"10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642"} Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.907532 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549445d8b-gx6kw" event={"ID":"fafd2c8f-32a4-4566-bacb-ff0973d4f158","Type":"ContainerDied","Data":"b160549a1f1973ced9d66d5a644bfbbfe24fbd4b6e83c1e7bfac7b63502ee9bb"} Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.907555 5107 scope.go:117] "RemoveContainer" containerID="10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.932829 5107 scope.go:117] "RemoveContainer" containerID="10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642" Dec 09 15:01:23 crc kubenswrapper[5107]: E1209 15:01:23.933794 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642\": container with ID starting with 10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642 not found: ID does not exist" containerID="10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.933833 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642"} err="failed to get container status \"10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642\": rpc error: code = NotFound desc = could not find container \"10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642\": container with ID starting with 10d80cb0545ab79029b33f6ce14d8d091d9c1842e01d707a5e98dcdbec4aa642 not found: ID does not exist" Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.936590 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-549445d8b-gx6kw"] Dec 09 15:01:23 crc kubenswrapper[5107]: I1209 15:01:23.940091 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-549445d8b-gx6kw"] Dec 09 15:01:24 crc kubenswrapper[5107]: I1209 15:01:24.272176 5107 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc"] Dec 09 15:01:24 crc kubenswrapper[5107]: I1209 15:01:24.280018 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:01:24 crc kubenswrapper[5107]: I1209 15:01:24.826163 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fafd2c8f-32a4-4566-bacb-ff0973d4f158" path="/var/lib/kubelet/pods/fafd2c8f-32a4-4566-bacb-ff0973d4f158/volumes" Dec 09 15:01:24 crc kubenswrapper[5107]: I1209 15:01:24.915266 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" event={"ID":"a468074a-3b33-41b3-a4a3-fd03158f1d07","Type":"ContainerStarted","Data":"6adfa0f3f3ac2f8efe6849fee89438a46f2cf6e734c785774106aa814cefc095"} Dec 09 15:01:24 crc kubenswrapper[5107]: I1209 15:01:24.915367 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" event={"ID":"a468074a-3b33-41b3-a4a3-fd03158f1d07","Type":"ContainerStarted","Data":"c2824f61826bea0d8f3b2b17b1ae06153d247e06ed77d51a6b2d6d5fc1f6354b"} Dec 09 15:01:24 crc kubenswrapper[5107]: I1209 15:01:24.915402 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:24 crc kubenswrapper[5107]: I1209 15:01:24.943142 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" podStartSLOduration=1.943117629 podStartE2EDuration="1.943117629s" podCreationTimestamp="2025-12-09 15:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:01:24.940582478 +0000 UTC m=+332.664287377" watchObservedRunningTime="2025-12-09 15:01:24.943117629 +0000 UTC m=+332.666822518" Dec 09 15:01:25 crc kubenswrapper[5107]: I1209 15:01:25.134433 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d6cd4dd85-4bllc" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.327293 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7hcq"] Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.328513 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z7hcq" podUID="21f1c435-27a8-4463-97da-af76d49f0e7a" containerName="registry-server" containerID="cri-o://495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2" gracePeriod=30 Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.345161 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmk4n"] Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.345455 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vmk4n" podUID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" containerName="registry-server" containerID="cri-o://d898669a4dd47b98f443572974b15338d8bd7e33f3ebe18620fc58015b5a776c" gracePeriod=30 Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.353877 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fnsxn"] Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.354219 5107 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" podUID="f50117dc-dbba-4bb9-9335-fc47f0b9ad48" containerName="marketplace-operator" containerID="cri-o://ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836" gracePeriod=30 Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.361086 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnpmv"] Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.361387 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rnpmv" podUID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerName="registry-server" containerID="cri-o://e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a" gracePeriod=30 Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.368555 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qrlqn"] Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.388381 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qrlqn"] Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.388619 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jx4fv"] Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.388867 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.389589 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jx4fv" podUID="b45380af-d55c-4f77-9385-8218e990c675" containerName="registry-server" containerID="cri-o://8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc" gracePeriod=30 Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.463853 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3678953d-f760-4bb6-8d2e-af5be96ba795-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.463916 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3678953d-f760-4bb6-8d2e-af5be96ba795-tmp\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.463939 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4g77\" (UniqueName: \"kubernetes.io/projected/3678953d-f760-4bb6-8d2e-af5be96ba795-kube-api-access-k4g77\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.463960 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/3678953d-f760-4bb6-8d2e-af5be96ba795-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.565046 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3678953d-f760-4bb6-8d2e-af5be96ba795-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.565133 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3678953d-f760-4bb6-8d2e-af5be96ba795-tmp\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.565163 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4g77\" (UniqueName: \"kubernetes.io/projected/3678953d-f760-4bb6-8d2e-af5be96ba795-kube-api-access-k4g77\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.565185 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3678953d-f760-4bb6-8d2e-af5be96ba795-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.566653 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3678953d-f760-4bb6-8d2e-af5be96ba795-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.566983 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3678953d-f760-4bb6-8d2e-af5be96ba795-tmp\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.587265 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3678953d-f760-4bb6-8d2e-af5be96ba795-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: \"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.587294 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4g77\" (UniqueName: \"kubernetes.io/projected/3678953d-f760-4bb6-8d2e-af5be96ba795-kube-api-access-k4g77\") pod \"marketplace-operator-547dbd544d-qrlqn\" (UID: 
\"3678953d-f760-4bb6-8d2e-af5be96ba795\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.805057 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.809599 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.869277 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-catalog-content\") pod \"21f1c435-27a8-4463-97da-af76d49f0e7a\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.869353 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5xgm\" (UniqueName: \"kubernetes.io/projected/21f1c435-27a8-4463-97da-af76d49f0e7a-kube-api-access-t5xgm\") pod \"21f1c435-27a8-4463-97da-af76d49f0e7a\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.869387 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-utilities\") pod \"21f1c435-27a8-4463-97da-af76d49f0e7a\" (UID: \"21f1c435-27a8-4463-97da-af76d49f0e7a\") " Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.872994 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-utilities" (OuterVolumeSpecName: "utilities") pod "21f1c435-27a8-4463-97da-af76d49f0e7a" (UID: "21f1c435-27a8-4463-97da-af76d49f0e7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.878423 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f1c435-27a8-4463-97da-af76d49f0e7a-kube-api-access-t5xgm" (OuterVolumeSpecName: "kube-api-access-t5xgm") pod "21f1c435-27a8-4463-97da-af76d49f0e7a" (UID: "21f1c435-27a8-4463-97da-af76d49f0e7a"). InnerVolumeSpecName "kube-api-access-t5xgm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.879728 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.918653 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21f1c435-27a8-4463-97da-af76d49f0e7a" (UID: "21f1c435-27a8-4463-97da-af76d49f0e7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.926436 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972200 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-trusted-ca\") pod \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972279 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-operator-metrics\") pod \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972345 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-catalog-content\") pod \"80fe473c-479a-4083-88ed-ff9ec66558b9\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972370 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sgtc\" (UniqueName: \"kubernetes.io/projected/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-kube-api-access-6sgtc\") pod \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972413 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzrlf\" (UniqueName: \"kubernetes.io/projected/80fe473c-479a-4083-88ed-ff9ec66558b9-kube-api-access-nzrlf\") pod \"80fe473c-479a-4083-88ed-ff9ec66558b9\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972444 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-utilities\") pod \"80fe473c-479a-4083-88ed-ff9ec66558b9\" (UID: \"80fe473c-479a-4083-88ed-ff9ec66558b9\") " Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972462 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-tmp\") pod \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\" (UID: \"f50117dc-dbba-4bb9-9335-fc47f0b9ad48\") " Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972642 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972656 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t5xgm\" (UniqueName: \"kubernetes.io/projected/21f1c435-27a8-4463-97da-af76d49f0e7a-kube-api-access-t5xgm\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972669 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21f1c435-27a8-4463-97da-af76d49f0e7a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.972913 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-tmp" (OuterVolumeSpecName: "tmp") pod "f50117dc-dbba-4bb9-9335-fc47f0b9ad48" (UID: "f50117dc-dbba-4bb9-9335-fc47f0b9ad48"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.973537 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f50117dc-dbba-4bb9-9335-fc47f0b9ad48" (UID: "f50117dc-dbba-4bb9-9335-fc47f0b9ad48"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.973691 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-utilities" (OuterVolumeSpecName: "utilities") pod "80fe473c-479a-4083-88ed-ff9ec66558b9" (UID: "80fe473c-479a-4083-88ed-ff9ec66558b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.976577 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80fe473c-479a-4083-88ed-ff9ec66558b9-kube-api-access-nzrlf" (OuterVolumeSpecName: "kube-api-access-nzrlf") pod "80fe473c-479a-4083-88ed-ff9ec66558b9" (UID: "80fe473c-479a-4083-88ed-ff9ec66558b9"). InnerVolumeSpecName "kube-api-access-nzrlf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.979435 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-kube-api-access-6sgtc" (OuterVolumeSpecName: "kube-api-access-6sgtc") pod "f50117dc-dbba-4bb9-9335-fc47f0b9ad48" (UID: "f50117dc-dbba-4bb9-9335-fc47f0b9ad48"). InnerVolumeSpecName "kube-api-access-6sgtc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.981166 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f50117dc-dbba-4bb9-9335-fc47f0b9ad48" (UID: "f50117dc-dbba-4bb9-9335-fc47f0b9ad48"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:01:46 crc kubenswrapper[5107]: I1209 15:01:46.997581 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80fe473c-479a-4083-88ed-ff9ec66558b9" (UID: "80fe473c-479a-4083-88ed-ff9ec66558b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.069073 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.074227 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.074259 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6sgtc\" (UniqueName: \"kubernetes.io/projected/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-kube-api-access-6sgtc\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.074271 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nzrlf\" (UniqueName: \"kubernetes.io/projected/80fe473c-479a-4083-88ed-ff9ec66558b9-kube-api-access-nzrlf\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.074284 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80fe473c-479a-4083-88ed-ff9ec66558b9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.074294 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.074304 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.074314 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f50117dc-dbba-4bb9-9335-fc47f0b9ad48-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.087555 5107 generic.go:358] "Generic (PLEG): container finished" podID="f50117dc-dbba-4bb9-9335-fc47f0b9ad48" containerID="ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836" exitCode=0 Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.087662 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" event={"ID":"f50117dc-dbba-4bb9-9335-fc47f0b9ad48","Type":"ContainerDied","Data":"ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836"} Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.087690 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.087729 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-fnsxn" event={"ID":"f50117dc-dbba-4bb9-9335-fc47f0b9ad48","Type":"ContainerDied","Data":"950bb64ae6590b6586cdf816404cda4ab1b25a3d8a51d34f2f03caa6360455ae"} Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.087755 5107 scope.go:117] "RemoveContainer" containerID="ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.093193 5107 generic.go:358] "Generic (PLEG): container finished" podID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerID="e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a" exitCode=0 Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.093280 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnpmv" event={"ID":"80fe473c-479a-4083-88ed-ff9ec66558b9","Type":"ContainerDied","Data":"e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a"} Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.093291 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnpmv" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.093306 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnpmv" event={"ID":"80fe473c-479a-4083-88ed-ff9ec66558b9","Type":"ContainerDied","Data":"6ea8c7806dfeb14762af39a00d1846b08ff086a2af9e0135cbf1f0250bc003e4"} Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.100866 5107 generic.go:358] "Generic (PLEG): container finished" podID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" containerID="d898669a4dd47b98f443572974b15338d8bd7e33f3ebe18620fc58015b5a776c" exitCode=0 Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.101060 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmk4n" event={"ID":"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e","Type":"ContainerDied","Data":"d898669a4dd47b98f443572974b15338d8bd7e33f3ebe18620fc58015b5a776c"} Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.109180 5107 scope.go:117] "RemoveContainer" containerID="ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836" Dec 09 15:01:47 crc kubenswrapper[5107]: E1209 15:01:47.110781 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836\": container with ID starting with ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836 not found: ID does not exist" containerID="ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.111011 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836"} err="failed to get container status \"ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836\": rpc error: code = NotFound desc = could not find container \"ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836\": container with ID starting with ebf82396d3947c538249f487e1f1199f24ce34c0184a9c55ccb4ccc19cf80836 not found: ID does not exist" Dec 09 15:01:47 crc kubenswrapper[5107]: 
I1209 15:01:47.111151 5107 scope.go:117] "RemoveContainer" containerID="e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.111931 5107 generic.go:358] "Generic (PLEG): container finished" podID="21f1c435-27a8-4463-97da-af76d49f0e7a" containerID="495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2" exitCode=0 Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.112106 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmk4n" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.112113 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7hcq" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.112305 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7hcq" event={"ID":"21f1c435-27a8-4463-97da-af76d49f0e7a","Type":"ContainerDied","Data":"495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2"} Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.112526 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7hcq" event={"ID":"21f1c435-27a8-4463-97da-af76d49f0e7a","Type":"ContainerDied","Data":"b0a0fbacf1c4b72b697f0f49471229eaa298dafdc8cd53808193c561b5884eff"} Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.118825 5107 generic.go:358] "Generic (PLEG): container finished" podID="b45380af-d55c-4f77-9385-8218e990c675" containerID="8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc" exitCode=0 Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.118936 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jx4fv" event={"ID":"b45380af-d55c-4f77-9385-8218e990c675","Type":"ContainerDied","Data":"8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc"} Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.118968 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jx4fv" event={"ID":"b45380af-d55c-4f77-9385-8218e990c675","Type":"ContainerDied","Data":"f74a888f137fb6b32f9a0f5726094b05a78e89b174ae9949b613f1df90592054"} Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.119045 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jx4fv" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.139281 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fnsxn"] Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.149254 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-fnsxn"] Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.160951 5107 scope.go:117] "RemoveContainer" containerID="b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.174790 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-utilities\") pod \"b45380af-d55c-4f77-9385-8218e990c675\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.174861 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-utilities\") pod \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.174899 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wmxq\" (UniqueName: \"kubernetes.io/projected/b45380af-d55c-4f77-9385-8218e990c675-kube-api-access-2wmxq\") pod \"b45380af-d55c-4f77-9385-8218e990c675\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.174973 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-954zm\" (UniqueName: \"kubernetes.io/projected/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-kube-api-access-954zm\") pod \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.175003 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-catalog-content\") pod \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\" (UID: \"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e\") " Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.175107 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-catalog-content\") pod \"b45380af-d55c-4f77-9385-8218e990c675\" (UID: \"b45380af-d55c-4f77-9385-8218e990c675\") " Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.176272 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-utilities" (OuterVolumeSpecName: "utilities") pod "6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" (UID: "6243a4ba-2331-4ff6-8d02-0d7cda7bb73e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.176588 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-utilities" (OuterVolumeSpecName: "utilities") pod "b45380af-d55c-4f77-9385-8218e990c675" (UID: "b45380af-d55c-4f77-9385-8218e990c675"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.178773 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45380af-d55c-4f77-9385-8218e990c675-kube-api-access-2wmxq" (OuterVolumeSpecName: "kube-api-access-2wmxq") pod "b45380af-d55c-4f77-9385-8218e990c675" (UID: "b45380af-d55c-4f77-9385-8218e990c675"). InnerVolumeSpecName "kube-api-access-2wmxq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.179630 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnpmv"] Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.182123 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-kube-api-access-954zm" (OuterVolumeSpecName: "kube-api-access-954zm") pod "6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" (UID: "6243a4ba-2331-4ff6-8d02-0d7cda7bb73e"). InnerVolumeSpecName "kube-api-access-954zm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.197889 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnpmv"] Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.200059 5107 scope.go:117] "RemoveContainer" containerID="28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.204405 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7hcq"] Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.208734 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z7hcq"] Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.214315 5107 scope.go:117] "RemoveContainer" containerID="e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a" Dec 09 15:01:47 crc kubenswrapper[5107]: E1209 15:01:47.214760 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a\": container with ID starting with e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a not found: ID does not exist" containerID="e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.214877 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a"} err="failed to get container status \"e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a\": rpc error: code = NotFound desc = could not find container \"e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a\": container with ID starting with e02597c9c9ae72b43109fb537dd6fa2210b59c80f0dfcad26e8e4d126ddbce6a not found: ID does not exist" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.215016 5107 scope.go:117] "RemoveContainer" containerID="b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2" Dec 09 15:01:47 crc kubenswrapper[5107]: E1209 15:01:47.215467 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2\": container with ID starting with b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2 not found: ID does not exist" containerID="b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.215559 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2"} err="failed to get container status \"b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2\": rpc error: code = NotFound desc = could not find container \"b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2\": container with ID starting with b88f8460101fc73127c9e0cfa7000a1ad577466288123cc947b60c7a21a73ac2 not found: ID does not exist" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.215647 5107 scope.go:117] "RemoveContainer" containerID="28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14" Dec 09 15:01:47 crc kubenswrapper[5107]: E1209 15:01:47.215969 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14\": container with ID starting with 28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14 not found: ID does not exist" containerID="28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.216070 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14"} err="failed to get container status \"28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14\": rpc error: code = NotFound desc = could not find container \"28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14\": container with ID starting with 28d29735f4a80fc49b5aef4aec61efb3944a5a40819fe4930cad87eede77db14 not found: ID does not exist" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.216160 5107 scope.go:117] "RemoveContainer" containerID="495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.230323 5107 scope.go:117] "RemoveContainer" containerID="da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.235582 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" (UID: "6243a4ba-2331-4ff6-8d02-0d7cda7bb73e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.244360 5107 scope.go:117] "RemoveContainer" containerID="9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.259400 5107 scope.go:117] "RemoveContainer" containerID="495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2" Dec 09 15:01:47 crc kubenswrapper[5107]: E1209 15:01:47.259692 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2\": container with ID starting with 495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2 not found: ID does not exist" containerID="495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.259721 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2"} err="failed to get container status \"495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2\": rpc error: code = NotFound desc = could not find container \"495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2\": container with ID starting with 495543fe863bfda45b89a6d4b2bbcffd88d5d2e42b87a669527026f197fffcf2 not found: ID does not exist" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.259738 5107 scope.go:117] "RemoveContainer" containerID="da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23" Dec 09 15:01:47 crc kubenswrapper[5107]: E1209 15:01:47.259962 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23\": container with ID starting with da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23 not found: ID does not exist" containerID="da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.259982 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23"} err="failed to get container status \"da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23\": rpc error: code = NotFound desc = could not find container \"da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23\": container with ID starting with da380b4b3518805fe07530fd879e16663290090c1093dd26824c643acc237b23 not found: ID does not exist" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.259994 5107 scope.go:117] "RemoveContainer" containerID="9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a" Dec 09 15:01:47 crc kubenswrapper[5107]: E1209 15:01:47.260151 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a\": container with ID starting with 9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a not found: ID does not exist" containerID="9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.260168 5107 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a"} err="failed to get container status \"9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a\": rpc error: code = NotFound desc = could not find container \"9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a\": container with ID starting with 9698641728ad8bda53e168d371e54d5da3d64f9d61c10eeddf5663f960d4837a not found: ID does not exist" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.260179 5107 scope.go:117] "RemoveContainer" containerID="8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.274505 5107 scope.go:117] "RemoveContainer" containerID="c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.276219 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.276272 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.276287 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2wmxq\" (UniqueName: \"kubernetes.io/projected/b45380af-d55c-4f77-9385-8218e990c675-kube-api-access-2wmxq\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.276299 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-954zm\" (UniqueName: \"kubernetes.io/projected/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-kube-api-access-954zm\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.276310 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.289628 5107 scope.go:117] "RemoveContainer" containerID="6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.293124 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b45380af-d55c-4f77-9385-8218e990c675" (UID: "b45380af-d55c-4f77-9385-8218e990c675"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.321060 5107 scope.go:117] "RemoveContainer" containerID="8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc" Dec 09 15:01:47 crc kubenswrapper[5107]: E1209 15:01:47.322103 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc\": container with ID starting with 8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc not found: ID does not exist" containerID="8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.322172 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc"} err="failed to get container status \"8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc\": rpc error: code = NotFound desc = could not find container \"8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc\": container with ID starting with 8f797238b5601d0b64b4614d7d2908c9423e76069c71eb008fc3316b70d1b9cc not found: ID does not exist" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.322215 5107 scope.go:117] "RemoveContainer" containerID="c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d" Dec 09 15:01:47 crc kubenswrapper[5107]: E1209 15:01:47.322636 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d\": container with ID starting with c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d not found: ID does not exist" containerID="c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.322677 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d"} err="failed to get container status \"c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d\": rpc error: code = NotFound desc = could not find container \"c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d\": container with ID starting with c29fcc40242a72f4a8eda598797655fd69094fc05c871993b7b0a3bb9da2f74d not found: ID does not exist" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.322716 5107 scope.go:117] "RemoveContainer" containerID="6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f" Dec 09 15:01:47 crc kubenswrapper[5107]: E1209 15:01:47.323042 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f\": container with ID starting with 6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f not found: ID does not exist" containerID="6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.323090 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f"} err="failed to get container status \"6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f\": rpc error: code = NotFound desc = could not 
find container \"6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f\": container with ID starting with 6fbf6ea75464bac7155312f51d1c0f0ea1b0f4dbbb784b0625b26ed3c077500f not found: ID does not exist" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.323668 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qrlqn"] Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.377697 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45380af-d55c-4f77-9385-8218e990c675-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.448925 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jx4fv"] Dec 09 15:01:47 crc kubenswrapper[5107]: I1209 15:01:47.454083 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jx4fv"] Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.132198 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" event={"ID":"3678953d-f760-4bb6-8d2e-af5be96ba795","Type":"ContainerStarted","Data":"e36ae6698983e7d97b0550c406fbc7ee44773ed29620dab19b3dffe92564fd5e"} Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.132704 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.132722 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" event={"ID":"3678953d-f760-4bb6-8d2e-af5be96ba795","Type":"ContainerStarted","Data":"2efcdc3d2b12a0f64ee2673d67014e3646546455521f5eca0fbf147249ca32ef"} Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.137527 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmk4n" event={"ID":"6243a4ba-2331-4ff6-8d02-0d7cda7bb73e","Type":"ContainerDied","Data":"b31685e92d5bb1563a869a3a1eca0bce59aa9c493c34400f4eb15b5dce6d8f14"} Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.137619 5107 scope.go:117] "RemoveContainer" containerID="d898669a4dd47b98f443572974b15338d8bd7e33f3ebe18620fc58015b5a776c" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.137631 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmk4n" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.139375 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.159961 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-qrlqn" podStartSLOduration=2.159937185 podStartE2EDuration="2.159937185s" podCreationTimestamp="2025-12-09 15:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:01:48.152182967 +0000 UTC m=+355.875887856" watchObservedRunningTime="2025-12-09 15:01:48.159937185 +0000 UTC m=+355.883642084" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.165669 5107 scope.go:117] "RemoveContainer" containerID="4470f97758135863ddf0c0d1dd2d807a42a078968ba74caa0b619f81bf15463e" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.192939 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmk4n"] Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.197222 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vmk4n"] Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.205746 5107 scope.go:117] "RemoveContainer" containerID="d9cae4e4fd9600c5b3de074e6c9ed71d51a0fe2fd77ef014627f4f500e515148" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.545925 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q6jch"] Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547448 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f50117dc-dbba-4bb9-9335-fc47f0b9ad48" containerName="marketplace-operator" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547480 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50117dc-dbba-4bb9-9335-fc47f0b9ad48" containerName="marketplace-operator" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547496 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerName="extract-content" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547503 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerName="extract-content" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547515 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerName="extract-utilities" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547522 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerName="extract-utilities" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547533 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b45380af-d55c-4f77-9385-8218e990c675" containerName="extract-content" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547539 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45380af-d55c-4f77-9385-8218e990c675" containerName="extract-content" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547559 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="b45380af-d55c-4f77-9385-8218e990c675" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547566 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45380af-d55c-4f77-9385-8218e990c675" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547574 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547581 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547591 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" containerName="extract-utilities" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547599 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" containerName="extract-utilities" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547611 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" containerName="extract-content" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547618 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" containerName="extract-content" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547630 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="21f1c435-27a8-4463-97da-af76d49f0e7a" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547636 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f1c435-27a8-4463-97da-af76d49f0e7a" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547646 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="21f1c435-27a8-4463-97da-af76d49f0e7a" containerName="extract-utilities" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547653 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f1c435-27a8-4463-97da-af76d49f0e7a" containerName="extract-utilities" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547661 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547668 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547679 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b45380af-d55c-4f77-9385-8218e990c675" containerName="extract-utilities" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547686 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45380af-d55c-4f77-9385-8218e990c675" containerName="extract-utilities" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547693 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="21f1c435-27a8-4463-97da-af76d49f0e7a" containerName="extract-content" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547699 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f1c435-27a8-4463-97da-af76d49f0e7a" 
containerName="extract-content" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547789 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="21f1c435-27a8-4463-97da-af76d49f0e7a" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547805 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547815 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="f50117dc-dbba-4bb9-9335-fc47f0b9ad48" containerName="marketplace-operator" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547824 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="80fe473c-479a-4083-88ed-ff9ec66558b9" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.547832 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b45380af-d55c-4f77-9385-8218e990c675" containerName="registry-server" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.557785 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.560892 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.563652 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6jch"] Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.597535 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f18d9937-56b6-4618-98da-04b648e391de-utilities\") pod \"certified-operators-q6jch\" (UID: \"f18d9937-56b6-4618-98da-04b648e391de\") " pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.597683 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhwmm\" (UniqueName: \"kubernetes.io/projected/f18d9937-56b6-4618-98da-04b648e391de-kube-api-access-zhwmm\") pod \"certified-operators-q6jch\" (UID: \"f18d9937-56b6-4618-98da-04b648e391de\") " pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.597744 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f18d9937-56b6-4618-98da-04b648e391de-catalog-content\") pod \"certified-operators-q6jch\" (UID: \"f18d9937-56b6-4618-98da-04b648e391de\") " pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.699381 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f18d9937-56b6-4618-98da-04b648e391de-catalog-content\") pod \"certified-operators-q6jch\" (UID: \"f18d9937-56b6-4618-98da-04b648e391de\") " pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.699761 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f18d9937-56b6-4618-98da-04b648e391de-utilities\") pod \"certified-operators-q6jch\" 
(UID: \"f18d9937-56b6-4618-98da-04b648e391de\") " pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.699832 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhwmm\" (UniqueName: \"kubernetes.io/projected/f18d9937-56b6-4618-98da-04b648e391de-kube-api-access-zhwmm\") pod \"certified-operators-q6jch\" (UID: \"f18d9937-56b6-4618-98da-04b648e391de\") " pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.700554 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f18d9937-56b6-4618-98da-04b648e391de-catalog-content\") pod \"certified-operators-q6jch\" (UID: \"f18d9937-56b6-4618-98da-04b648e391de\") " pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.700595 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f18d9937-56b6-4618-98da-04b648e391de-utilities\") pod \"certified-operators-q6jch\" (UID: \"f18d9937-56b6-4618-98da-04b648e391de\") " pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.726907 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhwmm\" (UniqueName: \"kubernetes.io/projected/f18d9937-56b6-4618-98da-04b648e391de-kube-api-access-zhwmm\") pod \"certified-operators-q6jch\" (UID: \"f18d9937-56b6-4618-98da-04b648e391de\") " pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.746653 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-skk5r"] Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.757192 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skk5r"] Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.757374 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.759907 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.801536 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-utilities\") pod \"redhat-marketplace-skk5r\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.801680 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-catalog-content\") pod \"redhat-marketplace-skk5r\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.801716 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s9fc\" (UniqueName: \"kubernetes.io/projected/a92286b2-fcc7-4fac-bcb9-75abe429385d-kube-api-access-4s9fc\") pod \"redhat-marketplace-skk5r\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.826307 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f1c435-27a8-4463-97da-af76d49f0e7a" path="/var/lib/kubelet/pods/21f1c435-27a8-4463-97da-af76d49f0e7a/volumes" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.827254 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6243a4ba-2331-4ff6-8d02-0d7cda7bb73e" path="/var/lib/kubelet/pods/6243a4ba-2331-4ff6-8d02-0d7cda7bb73e/volumes" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.828080 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80fe473c-479a-4083-88ed-ff9ec66558b9" path="/var/lib/kubelet/pods/80fe473c-479a-4083-88ed-ff9ec66558b9/volumes" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.829273 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45380af-d55c-4f77-9385-8218e990c675" path="/var/lib/kubelet/pods/b45380af-d55c-4f77-9385-8218e990c675/volumes" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.830103 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f50117dc-dbba-4bb9-9335-fc47f0b9ad48" path="/var/lib/kubelet/pods/f50117dc-dbba-4bb9-9335-fc47f0b9ad48/volumes" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.875622 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.903274 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-utilities\") pod \"redhat-marketplace-skk5r\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.903891 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-utilities\") pod \"redhat-marketplace-skk5r\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.904919 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-catalog-content\") pod \"redhat-marketplace-skk5r\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.905231 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-catalog-content\") pod \"redhat-marketplace-skk5r\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.905271 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4s9fc\" (UniqueName: \"kubernetes.io/projected/a92286b2-fcc7-4fac-bcb9-75abe429385d-kube-api-access-4s9fc\") pod \"redhat-marketplace-skk5r\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:48 crc kubenswrapper[5107]: I1209 15:01:48.924708 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s9fc\" (UniqueName: \"kubernetes.io/projected/a92286b2-fcc7-4fac-bcb9-75abe429385d-kube-api-access-4s9fc\") pod \"redhat-marketplace-skk5r\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:49 crc kubenswrapper[5107]: I1209 15:01:49.071849 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:49 crc kubenswrapper[5107]: I1209 15:01:49.303584 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6jch"] Dec 09 15:01:49 crc kubenswrapper[5107]: W1209 15:01:49.312020 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf18d9937_56b6_4618_98da_04b648e391de.slice/crio-f6b9cd0b3c8186c5aac2fd55d7a49f70de484ac4baf260fe398020a31f84773b WatchSource:0}: Error finding container f6b9cd0b3c8186c5aac2fd55d7a49f70de484ac4baf260fe398020a31f84773b: Status 404 returned error can't find the container with id f6b9cd0b3c8186c5aac2fd55d7a49f70de484ac4baf260fe398020a31f84773b Dec 09 15:01:49 crc kubenswrapper[5107]: I1209 15:01:49.463142 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skk5r"] Dec 09 15:01:49 crc kubenswrapper[5107]: W1209 15:01:49.503495 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda92286b2_fcc7_4fac_bcb9_75abe429385d.slice/crio-834af3aa39a91e9db02a20a3febccbeb49bfea31439f1ef696a3de103716f08b WatchSource:0}: Error finding container 834af3aa39a91e9db02a20a3febccbeb49bfea31439f1ef696a3de103716f08b: Status 404 returned error can't find the container with id 834af3aa39a91e9db02a20a3febccbeb49bfea31439f1ef696a3de103716f08b Dec 09 15:01:50 crc kubenswrapper[5107]: I1209 15:01:50.163025 5107 generic.go:358] "Generic (PLEG): container finished" podID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerID="4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a" exitCode=0 Dec 09 15:01:50 crc kubenswrapper[5107]: I1209 15:01:50.163133 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skk5r" event={"ID":"a92286b2-fcc7-4fac-bcb9-75abe429385d","Type":"ContainerDied","Data":"4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a"} Dec 09 15:01:50 crc kubenswrapper[5107]: I1209 15:01:50.163164 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skk5r" event={"ID":"a92286b2-fcc7-4fac-bcb9-75abe429385d","Type":"ContainerStarted","Data":"834af3aa39a91e9db02a20a3febccbeb49bfea31439f1ef696a3de103716f08b"} Dec 09 15:01:50 crc kubenswrapper[5107]: I1209 15:01:50.165959 5107 generic.go:358] "Generic (PLEG): container finished" podID="f18d9937-56b6-4618-98da-04b648e391de" containerID="22c966b2258b0d95023cbdf2b94cd9951ac2f0814a0dedca7c4ff37681463a9f" exitCode=0 Dec 09 15:01:50 crc kubenswrapper[5107]: I1209 15:01:50.166134 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6jch" event={"ID":"f18d9937-56b6-4618-98da-04b648e391de","Type":"ContainerDied","Data":"22c966b2258b0d95023cbdf2b94cd9951ac2f0814a0dedca7c4ff37681463a9f"} Dec 09 15:01:50 crc kubenswrapper[5107]: I1209 15:01:50.166219 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6jch" event={"ID":"f18d9937-56b6-4618-98da-04b648e391de","Type":"ContainerStarted","Data":"f6b9cd0b3c8186c5aac2fd55d7a49f70de484ac4baf260fe398020a31f84773b"} Dec 09 15:01:50 crc kubenswrapper[5107]: I1209 15:01:50.942818 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2n5w2"] Dec 09 15:01:50 crc kubenswrapper[5107]: I1209 15:01:50.951157 5107 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2n5w2"] Dec 09 15:01:50 crc kubenswrapper[5107]: I1209 15:01:50.951295 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:50 crc kubenswrapper[5107]: I1209 15:01:50.953549 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.039022 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b61a91-a39d-48e6-ab66-2b34fab3b95b-catalog-content\") pod \"community-operators-2n5w2\" (UID: \"73b61a91-a39d-48e6-ab66-2b34fab3b95b\") " pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.039074 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b61a91-a39d-48e6-ab66-2b34fab3b95b-utilities\") pod \"community-operators-2n5w2\" (UID: \"73b61a91-a39d-48e6-ab66-2b34fab3b95b\") " pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.039225 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5ps\" (UniqueName: \"kubernetes.io/projected/73b61a91-a39d-48e6-ab66-2b34fab3b95b-kube-api-access-gf5ps\") pod \"community-operators-2n5w2\" (UID: \"73b61a91-a39d-48e6-ab66-2b34fab3b95b\") " pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.140912 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b61a91-a39d-48e6-ab66-2b34fab3b95b-catalog-content\") pod \"community-operators-2n5w2\" (UID: \"73b61a91-a39d-48e6-ab66-2b34fab3b95b\") " pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.140967 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b61a91-a39d-48e6-ab66-2b34fab3b95b-utilities\") pod \"community-operators-2n5w2\" (UID: \"73b61a91-a39d-48e6-ab66-2b34fab3b95b\") " pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.141026 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gf5ps\" (UniqueName: \"kubernetes.io/projected/73b61a91-a39d-48e6-ab66-2b34fab3b95b-kube-api-access-gf5ps\") pod \"community-operators-2n5w2\" (UID: \"73b61a91-a39d-48e6-ab66-2b34fab3b95b\") " pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.141501 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b61a91-a39d-48e6-ab66-2b34fab3b95b-catalog-content\") pod \"community-operators-2n5w2\" (UID: \"73b61a91-a39d-48e6-ab66-2b34fab3b95b\") " pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.141729 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/73b61a91-a39d-48e6-ab66-2b34fab3b95b-utilities\") pod \"community-operators-2n5w2\" (UID: \"73b61a91-a39d-48e6-ab66-2b34fab3b95b\") " pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.142992 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g6kk5"] Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.150402 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.150970 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g6kk5"] Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.153169 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.179800 5107 generic.go:358] "Generic (PLEG): container finished" podID="f18d9937-56b6-4618-98da-04b648e391de" containerID="fe09a8a20991ceac9e54e3399c43c62111f183ac6b4bc66f746ad42edd5464b4" exitCode=0 Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.179984 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6jch" event={"ID":"f18d9937-56b6-4618-98da-04b648e391de","Type":"ContainerDied","Data":"fe09a8a20991ceac9e54e3399c43c62111f183ac6b4bc66f746ad42edd5464b4"} Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.183794 5107 generic.go:358] "Generic (PLEG): container finished" podID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerID="8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784" exitCode=0 Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.183845 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skk5r" event={"ID":"a92286b2-fcc7-4fac-bcb9-75abe429385d","Type":"ContainerDied","Data":"8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784"} Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.190169 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf5ps\" (UniqueName: \"kubernetes.io/projected/73b61a91-a39d-48e6-ab66-2b34fab3b95b-kube-api-access-gf5ps\") pod \"community-operators-2n5w2\" (UID: \"73b61a91-a39d-48e6-ab66-2b34fab3b95b\") " pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.242857 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab84895-02ab-43a2-807b-eb140162b480-utilities\") pod \"redhat-operators-g6kk5\" (UID: \"fab84895-02ab-43a2-807b-eb140162b480\") " pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.243579 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab84895-02ab-43a2-807b-eb140162b480-catalog-content\") pod \"redhat-operators-g6kk5\" (UID: \"fab84895-02ab-43a2-807b-eb140162b480\") " pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.243749 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmdr6\" (UniqueName: 
\"kubernetes.io/projected/fab84895-02ab-43a2-807b-eb140162b480-kube-api-access-kmdr6\") pod \"redhat-operators-g6kk5\" (UID: \"fab84895-02ab-43a2-807b-eb140162b480\") " pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.344916 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab84895-02ab-43a2-807b-eb140162b480-catalog-content\") pod \"redhat-operators-g6kk5\" (UID: \"fab84895-02ab-43a2-807b-eb140162b480\") " pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.345251 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kmdr6\" (UniqueName: \"kubernetes.io/projected/fab84895-02ab-43a2-807b-eb140162b480-kube-api-access-kmdr6\") pod \"redhat-operators-g6kk5\" (UID: \"fab84895-02ab-43a2-807b-eb140162b480\") " pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.345387 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab84895-02ab-43a2-807b-eb140162b480-utilities\") pod \"redhat-operators-g6kk5\" (UID: \"fab84895-02ab-43a2-807b-eb140162b480\") " pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.345987 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab84895-02ab-43a2-807b-eb140162b480-utilities\") pod \"redhat-operators-g6kk5\" (UID: \"fab84895-02ab-43a2-807b-eb140162b480\") " pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.351639 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab84895-02ab-43a2-807b-eb140162b480-catalog-content\") pod \"redhat-operators-g6kk5\" (UID: \"fab84895-02ab-43a2-807b-eb140162b480\") " pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.369995 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmdr6\" (UniqueName: \"kubernetes.io/projected/fab84895-02ab-43a2-807b-eb140162b480-kube-api-access-kmdr6\") pod \"redhat-operators-g6kk5\" (UID: \"fab84895-02ab-43a2-807b-eb140162b480\") " pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.370427 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.486779 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.780176 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2n5w2"] Dec 09 15:01:51 crc kubenswrapper[5107]: W1209 15:01:51.790303 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b61a91_a39d_48e6_ab66_2b34fab3b95b.slice/crio-9372f99d9abf440d47629253b2a22d62db4096b8d157150906064ed362bd4f6a WatchSource:0}: Error finding container 9372f99d9abf440d47629253b2a22d62db4096b8d157150906064ed362bd4f6a: Status 404 returned error can't find the container with id 9372f99d9abf440d47629253b2a22d62db4096b8d157150906064ed362bd4f6a Dec 09 15:01:51 crc kubenswrapper[5107]: I1209 15:01:51.897957 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g6kk5"] Dec 09 15:01:51 crc kubenswrapper[5107]: W1209 15:01:51.901764 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfab84895_02ab_43a2_807b_eb140162b480.slice/crio-b4303d6e745d612539973bcc0e0a06e7d10b8b5eb95c41f4e3ef19c88ada4e3b WatchSource:0}: Error finding container b4303d6e745d612539973bcc0e0a06e7d10b8b5eb95c41f4e3ef19c88ada4e3b: Status 404 returned error can't find the container with id b4303d6e745d612539973bcc0e0a06e7d10b8b5eb95c41f4e3ef19c88ada4e3b Dec 09 15:01:52 crc kubenswrapper[5107]: I1209 15:01:52.191003 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6jch" event={"ID":"f18d9937-56b6-4618-98da-04b648e391de","Type":"ContainerStarted","Data":"e0535a4cc4546944440fc38473c80f4311c02bfa6efd70b5dcdb73620e1fda60"} Dec 09 15:01:52 crc kubenswrapper[5107]: I1209 15:01:52.193406 5107 generic.go:358] "Generic (PLEG): container finished" podID="fab84895-02ab-43a2-807b-eb140162b480" containerID="becfb74fae645f6e20c7ea76b439b66f1e3c51b81a93ad84ddbd5d6c7037dcdf" exitCode=0 Dec 09 15:01:52 crc kubenswrapper[5107]: I1209 15:01:52.193471 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6kk5" event={"ID":"fab84895-02ab-43a2-807b-eb140162b480","Type":"ContainerDied","Data":"becfb74fae645f6e20c7ea76b439b66f1e3c51b81a93ad84ddbd5d6c7037dcdf"} Dec 09 15:01:52 crc kubenswrapper[5107]: I1209 15:01:52.193543 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6kk5" event={"ID":"fab84895-02ab-43a2-807b-eb140162b480","Type":"ContainerStarted","Data":"b4303d6e745d612539973bcc0e0a06e7d10b8b5eb95c41f4e3ef19c88ada4e3b"} Dec 09 15:01:52 crc kubenswrapper[5107]: I1209 15:01:52.196473 5107 generic.go:358] "Generic (PLEG): container finished" podID="73b61a91-a39d-48e6-ab66-2b34fab3b95b" containerID="40de0faeddb76e3d1b3b382f2275d2f916884d5cf8f0c69be0a12c630c8806a5" exitCode=0 Dec 09 15:01:52 crc kubenswrapper[5107]: I1209 15:01:52.196513 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n5w2" event={"ID":"73b61a91-a39d-48e6-ab66-2b34fab3b95b","Type":"ContainerDied","Data":"40de0faeddb76e3d1b3b382f2275d2f916884d5cf8f0c69be0a12c630c8806a5"} Dec 09 15:01:52 crc kubenswrapper[5107]: I1209 15:01:52.196550 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n5w2" 
event={"ID":"73b61a91-a39d-48e6-ab66-2b34fab3b95b","Type":"ContainerStarted","Data":"9372f99d9abf440d47629253b2a22d62db4096b8d157150906064ed362bd4f6a"} Dec 09 15:01:52 crc kubenswrapper[5107]: I1209 15:01:52.198850 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skk5r" event={"ID":"a92286b2-fcc7-4fac-bcb9-75abe429385d","Type":"ContainerStarted","Data":"a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5"} Dec 09 15:01:52 crc kubenswrapper[5107]: I1209 15:01:52.215668 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q6jch" podStartSLOduration=3.695578188 podStartE2EDuration="4.215648156s" podCreationTimestamp="2025-12-09 15:01:48 +0000 UTC" firstStartedPulling="2025-12-09 15:01:50.167001512 +0000 UTC m=+357.890706441" lastFinishedPulling="2025-12-09 15:01:50.68707152 +0000 UTC m=+358.410776409" observedRunningTime="2025-12-09 15:01:52.214952927 +0000 UTC m=+359.938657816" watchObservedRunningTime="2025-12-09 15:01:52.215648156 +0000 UTC m=+359.939353045" Dec 09 15:01:52 crc kubenswrapper[5107]: I1209 15:01:52.239785 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-skk5r" podStartSLOduration=3.667205041 podStartE2EDuration="4.239758453s" podCreationTimestamp="2025-12-09 15:01:48 +0000 UTC" firstStartedPulling="2025-12-09 15:01:50.164274495 +0000 UTC m=+357.887979414" lastFinishedPulling="2025-12-09 15:01:50.736827937 +0000 UTC m=+358.460532826" observedRunningTime="2025-12-09 15:01:52.235492063 +0000 UTC m=+359.959196952" watchObservedRunningTime="2025-12-09 15:01:52.239758453 +0000 UTC m=+359.963463342" Dec 09 15:01:53 crc kubenswrapper[5107]: I1209 15:01:53.207514 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6kk5" event={"ID":"fab84895-02ab-43a2-807b-eb140162b480","Type":"ContainerStarted","Data":"f5bb3ea6e0f7e8d2ee571cfbac531ef627bfae54e44cfade214044f759b1ee53"} Dec 09 15:01:53 crc kubenswrapper[5107]: I1209 15:01:53.209758 5107 generic.go:358] "Generic (PLEG): container finished" podID="73b61a91-a39d-48e6-ab66-2b34fab3b95b" containerID="fe39c88ce717b70d4f16f8a89f369e1dfdbb058d26a6ee753da8368f499254b6" exitCode=0 Dec 09 15:01:53 crc kubenswrapper[5107]: I1209 15:01:53.210138 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n5w2" event={"ID":"73b61a91-a39d-48e6-ab66-2b34fab3b95b","Type":"ContainerDied","Data":"fe39c88ce717b70d4f16f8a89f369e1dfdbb058d26a6ee753da8368f499254b6"} Dec 09 15:01:54 crc kubenswrapper[5107]: I1209 15:01:54.219778 5107 generic.go:358] "Generic (PLEG): container finished" podID="fab84895-02ab-43a2-807b-eb140162b480" containerID="f5bb3ea6e0f7e8d2ee571cfbac531ef627bfae54e44cfade214044f759b1ee53" exitCode=0 Dec 09 15:01:54 crc kubenswrapper[5107]: I1209 15:01:54.219883 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6kk5" event={"ID":"fab84895-02ab-43a2-807b-eb140162b480","Type":"ContainerDied","Data":"f5bb3ea6e0f7e8d2ee571cfbac531ef627bfae54e44cfade214044f759b1ee53"} Dec 09 15:01:54 crc kubenswrapper[5107]: I1209 15:01:54.222724 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n5w2" event={"ID":"73b61a91-a39d-48e6-ab66-2b34fab3b95b","Type":"ContainerStarted","Data":"26c92cfb7e051f5c017c3c0f1c840a11601619695a6f29e751333e0b2d50e2e0"} Dec 09 15:01:54 crc 
kubenswrapper[5107]: I1209 15:01:54.258867 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2n5w2" podStartSLOduration=3.750658752 podStartE2EDuration="4.258843227s" podCreationTimestamp="2025-12-09 15:01:50 +0000 UTC" firstStartedPulling="2025-12-09 15:01:52.198727001 +0000 UTC m=+359.922431890" lastFinishedPulling="2025-12-09 15:01:52.706911476 +0000 UTC m=+360.430616365" observedRunningTime="2025-12-09 15:01:54.256539063 +0000 UTC m=+361.980243972" watchObservedRunningTime="2025-12-09 15:01:54.258843227 +0000 UTC m=+361.982548126" Dec 09 15:01:55 crc kubenswrapper[5107]: I1209 15:01:55.231045 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6kk5" event={"ID":"fab84895-02ab-43a2-807b-eb140162b480","Type":"ContainerStarted","Data":"e6bd15ee481ae92f78e6c5f450fa42564c1083d589a2f95d423b527fa945442b"} Dec 09 15:01:55 crc kubenswrapper[5107]: I1209 15:01:55.256112 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g6kk5" podStartSLOduration=3.759253984 podStartE2EDuration="4.25608685s" podCreationTimestamp="2025-12-09 15:01:51 +0000 UTC" firstStartedPulling="2025-12-09 15:01:52.194365209 +0000 UTC m=+359.918070098" lastFinishedPulling="2025-12-09 15:01:52.691198075 +0000 UTC m=+360.414902964" observedRunningTime="2025-12-09 15:01:55.250520424 +0000 UTC m=+362.974225343" watchObservedRunningTime="2025-12-09 15:01:55.25608685 +0000 UTC m=+362.979791739" Dec 09 15:01:58 crc kubenswrapper[5107]: I1209 15:01:58.876807 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:58 crc kubenswrapper[5107]: I1209 15:01:58.877404 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:58 crc kubenswrapper[5107]: I1209 15:01:58.913600 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:59 crc kubenswrapper[5107]: I1209 15:01:59.072188 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:59 crc kubenswrapper[5107]: I1209 15:01:59.072490 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:59 crc kubenswrapper[5107]: I1209 15:01:59.109172 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:01:59 crc kubenswrapper[5107]: I1209 15:01:59.301984 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q6jch" Dec 09 15:01:59 crc kubenswrapper[5107]: I1209 15:01:59.303018 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:02:01 crc kubenswrapper[5107]: I1209 15:02:01.371050 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:02:01 crc kubenswrapper[5107]: I1209 15:02:01.371365 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:02:01 crc kubenswrapper[5107]: I1209 15:02:01.409526 5107 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:02:01 crc kubenswrapper[5107]: I1209 15:02:01.487152 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:02:01 crc kubenswrapper[5107]: I1209 15:02:01.487248 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:02:01 crc kubenswrapper[5107]: I1209 15:02:01.522360 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:02:02 crc kubenswrapper[5107]: I1209 15:02:02.333694 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g6kk5" Dec 09 15:02:02 crc kubenswrapper[5107]: I1209 15:02:02.341896 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2n5w2" Dec 09 15:02:44 crc kubenswrapper[5107]: I1209 15:02:44.154602 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:02:44 crc kubenswrapper[5107]: I1209 15:02:44.155279 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:03:14 crc kubenswrapper[5107]: I1209 15:03:14.154867 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:03:14 crc kubenswrapper[5107]: I1209 15:03:14.155529 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:03:44 crc kubenswrapper[5107]: I1209 15:03:44.154051 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:03:44 crc kubenswrapper[5107]: I1209 15:03:44.154709 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:03:44 crc kubenswrapper[5107]: I1209 15:03:44.154776 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 15:03:44 crc 
kubenswrapper[5107]: I1209 15:03:44.155845 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34519aada4fd9eb2bcaeeb1db4e761259a091e00fc9ecec917ee75d1c4f3c68a"} pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:03:44 crc kubenswrapper[5107]: I1209 15:03:44.155951 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" containerID="cri-o://34519aada4fd9eb2bcaeeb1db4e761259a091e00fc9ecec917ee75d1c4f3c68a" gracePeriod=600 Dec 09 15:03:44 crc kubenswrapper[5107]: I1209 15:03:44.983638 5107 generic.go:358] "Generic (PLEG): container finished" podID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerID="34519aada4fd9eb2bcaeeb1db4e761259a091e00fc9ecec917ee75d1c4f3c68a" exitCode=0 Dec 09 15:03:44 crc kubenswrapper[5107]: I1209 15:03:44.983745 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerDied","Data":"34519aada4fd9eb2bcaeeb1db4e761259a091e00fc9ecec917ee75d1c4f3c68a"} Dec 09 15:03:44 crc kubenswrapper[5107]: I1209 15:03:44.984001 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerStarted","Data":"f703877f2270bc9edf98fb7d33fe97e11bfba514d68412fac7acd3c7d8621675"} Dec 09 15:03:44 crc kubenswrapper[5107]: I1209 15:03:44.984030 5107 scope.go:117] "RemoveContainer" containerID="c9ff12bb0bcdabee200b9ad2d18d95ed57d2bf2ef0990fb07cb22ecab1d5e617" Dec 09 15:05:44 crc kubenswrapper[5107]: I1209 15:05:44.154513 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:05:44 crc kubenswrapper[5107]: I1209 15:05:44.155507 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:05:53 crc kubenswrapper[5107]: I1209 15:05:53.108576 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 15:05:53 crc kubenswrapper[5107]: I1209 15:05:53.110211 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 15:06:14 crc kubenswrapper[5107]: I1209 15:06:14.154687 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:06:14 crc 
kubenswrapper[5107]: I1209 15:06:14.155289 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:06:17 crc kubenswrapper[5107]: I1209 15:06:17.891720 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54504: no serving certificate available for the kubelet" Dec 09 15:06:44 crc kubenswrapper[5107]: I1209 15:06:44.154323 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:06:44 crc kubenswrapper[5107]: I1209 15:06:44.155006 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:06:44 crc kubenswrapper[5107]: I1209 15:06:44.155064 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 15:06:44 crc kubenswrapper[5107]: I1209 15:06:44.155735 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f703877f2270bc9edf98fb7d33fe97e11bfba514d68412fac7acd3c7d8621675"} pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:06:44 crc kubenswrapper[5107]: I1209 15:06:44.155803 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" containerID="cri-o://f703877f2270bc9edf98fb7d33fe97e11bfba514d68412fac7acd3c7d8621675" gracePeriod=600 Dec 09 15:06:44 crc kubenswrapper[5107]: I1209 15:06:44.286003 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:06:45 crc kubenswrapper[5107]: I1209 15:06:45.047907 5107 generic.go:358] "Generic (PLEG): container finished" podID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerID="f703877f2270bc9edf98fb7d33fe97e11bfba514d68412fac7acd3c7d8621675" exitCode=0 Dec 09 15:06:45 crc kubenswrapper[5107]: I1209 15:06:45.047976 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerDied","Data":"f703877f2270bc9edf98fb7d33fe97e11bfba514d68412fac7acd3c7d8621675"} Dec 09 15:06:45 crc kubenswrapper[5107]: I1209 15:06:45.048375 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerStarted","Data":"be297312e5ca9ef320955bd5db7e8e291d1e5bad441d948b03d47094da2e57b8"} Dec 09 15:06:45 crc kubenswrapper[5107]: I1209 15:06:45.048395 5107 scope.go:117] "RemoveContainer" 
containerID="34519aada4fd9eb2bcaeeb1db4e761259a091e00fc9ecec917ee75d1c4f3c68a" Dec 09 15:06:49 crc kubenswrapper[5107]: I1209 15:06:49.989919 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj"] Dec 09 15:06:49 crc kubenswrapper[5107]: I1209 15:06:49.990570 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" podUID="035458af-eba0-4241-bcac-4e11d6358b21" containerName="kube-rbac-proxy" containerID="cri-o://d4a3745133e65e2be2bd2cbac84be5bd1275f003ea6f2c4be8b1ccb707efe066" gracePeriod=30 Dec 09 15:06:49 crc kubenswrapper[5107]: I1209 15:06:49.990715 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" podUID="035458af-eba0-4241-bcac-4e11d6358b21" containerName="ovnkube-cluster-manager" containerID="cri-o://8eb899657bdc52cb8444c544352a0c306b439d5fe4c54705d994a6ac368a93e8" gracePeriod=30 Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.192852 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.218211 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc"] Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.218776 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="035458af-eba0-4241-bcac-4e11d6358b21" containerName="kube-rbac-proxy" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.218796 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="035458af-eba0-4241-bcac-4e11d6358b21" containerName="kube-rbac-proxy" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.218834 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="035458af-eba0-4241-bcac-4e11d6358b21" containerName="ovnkube-cluster-manager" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.218841 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="035458af-eba0-4241-bcac-4e11d6358b21" containerName="ovnkube-cluster-manager" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.218915 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="035458af-eba0-4241-bcac-4e11d6358b21" containerName="ovnkube-cluster-manager" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.218925 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="035458af-eba0-4241-bcac-4e11d6358b21" containerName="kube-rbac-proxy" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.222400 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.226684 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9rjcr"] Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.227128 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovn-controller" containerID="cri-o://2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2" gracePeriod=30 Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.227296 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="sbdb" containerID="cri-o://8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf" gracePeriod=30 Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.227355 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="nbdb" containerID="cri-o://30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9" gracePeriod=30 Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.227386 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="northd" containerID="cri-o://9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4" gracePeriod=30 Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.227417 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovn-acl-logging" containerID="cri-o://f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a" gracePeriod=30 Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.227434 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="kube-rbac-proxy-node" containerID="cri-o://25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b" gracePeriod=30 Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.227458 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221" gracePeriod=30 Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.265248 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbvk\" (UniqueName: \"kubernetes.io/projected/035458af-eba0-4241-bcac-4e11d6358b21-kube-api-access-jbbvk\") pod \"035458af-eba0-4241-bcac-4e11d6358b21\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.265395 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/035458af-eba0-4241-bcac-4e11d6358b21-ovn-control-plane-metrics-cert\") pod \"035458af-eba0-4241-bcac-4e11d6358b21\" (UID: 
\"035458af-eba0-4241-bcac-4e11d6358b21\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.265469 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-env-overrides\") pod \"035458af-eba0-4241-bcac-4e11d6358b21\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.265518 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-ovnkube-config\") pod \"035458af-eba0-4241-bcac-4e11d6358b21\" (UID: \"035458af-eba0-4241-bcac-4e11d6358b21\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.266130 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "035458af-eba0-4241-bcac-4e11d6358b21" (UID: "035458af-eba0-4241-bcac-4e11d6358b21"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.266147 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "035458af-eba0-4241-bcac-4e11d6358b21" (UID: "035458af-eba0-4241-bcac-4e11d6358b21"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.266471 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovnkube-controller" containerID="cri-o://0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3" gracePeriod=30 Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.266640 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.266661 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/035458af-eba0-4241-bcac-4e11d6358b21-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.276450 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/035458af-eba0-4241-bcac-4e11d6358b21-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "035458af-eba0-4241-bcac-4e11d6358b21" (UID: "035458af-eba0-4241-bcac-4e11d6358b21"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.277608 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035458af-eba0-4241-bcac-4e11d6358b21-kube-api-access-jbbvk" (OuterVolumeSpecName: "kube-api-access-jbbvk") pod "035458af-eba0-4241-bcac-4e11d6358b21" (UID: "035458af-eba0-4241-bcac-4e11d6358b21"). InnerVolumeSpecName "kube-api-access-jbbvk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.368739 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b553041b-fd27-4e75-8e82-35c45820589a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.369035 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b553041b-fd27-4e75-8e82-35c45820589a-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.369182 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b553041b-fd27-4e75-8e82-35c45820589a-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.369314 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxvh8\" (UniqueName: \"kubernetes.io/projected/b553041b-fd27-4e75-8e82-35c45820589a-kube-api-access-nxvh8\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.369534 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jbbvk\" (UniqueName: \"kubernetes.io/projected/035458af-eba0-4241-bcac-4e11d6358b21-kube-api-access-jbbvk\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.369603 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/035458af-eba0-4241-bcac-4e11d6358b21-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.470867 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b553041b-fd27-4e75-8e82-35c45820589a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.470925 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b553041b-fd27-4e75-8e82-35c45820589a-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.470951 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/b553041b-fd27-4e75-8e82-35c45820589a-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.470973 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nxvh8\" (UniqueName: \"kubernetes.io/projected/b553041b-fd27-4e75-8e82-35c45820589a-kube-api-access-nxvh8\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.471974 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b553041b-fd27-4e75-8e82-35c45820589a-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.472205 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b553041b-fd27-4e75-8e82-35c45820589a-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.474723 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b553041b-fd27-4e75-8e82-35c45820589a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.483734 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rjcr_b75d4675-9c37-47cf-8fa3-11097aa379ca/ovn-acl-logging/0.log" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.485717 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rjcr_b75d4675-9c37-47cf-8fa3-11097aa379ca/ovn-controller/0.log" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.486205 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.491982 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxvh8\" (UniqueName: \"kubernetes.io/projected/b553041b-fd27-4e75-8e82-35c45820589a-kube-api-access-nxvh8\") pod \"ovnkube-control-plane-97c9b6c48-wkjjc\" (UID: \"b553041b-fd27-4e75-8e82-35c45820589a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.540704 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vsnlz"] Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541366 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="kubecfg-setup" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541388 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="kubecfg-setup" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541416 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="nbdb" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541427 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="nbdb" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541436 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovn-controller" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541447 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovn-controller" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541456 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="sbdb" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541464 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="sbdb" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541486 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541495 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541508 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="northd" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541515 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="northd" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541533 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovn-acl-logging" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541541 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovn-acl-logging" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 
15:06:50.541557 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovnkube-controller" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541565 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovnkube-controller" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541579 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="kube-rbac-proxy-node" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541587 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="kube-rbac-proxy-node" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541690 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541702 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovn-controller" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541711 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="kube-rbac-proxy-node" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541720 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="sbdb" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541733 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="northd" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541747 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="nbdb" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541758 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovnkube-controller" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.541769 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerName="ovn-acl-logging" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.549684 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.553619 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.572192 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-config\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.572466 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-systemd-units\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.572607 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljp8p\" (UniqueName: \"kubernetes.io/projected/b75d4675-9c37-47cf-8fa3-11097aa379ca-kube-api-access-ljp8p\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.572716 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-bin\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.572792 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-ovn-kubernetes\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573435 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-kubelet\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573525 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-node-log\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573601 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-etc-openvswitch\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573667 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-openvswitch\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573738 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-var-lib-openvswitch\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573805 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovn-node-metrics-cert\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573879 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-systemd\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573989 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-env-overrides\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.574081 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-slash\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.574138 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-ovn\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.574242 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-script-lib\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.574352 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-log-socket\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.574428 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-netns\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.574515 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.574621 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-netd\") pod \"b75d4675-9c37-47cf-8fa3-11097aa379ca\" (UID: \"b75d4675-9c37-47cf-8fa3-11097aa379ca\") " Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573191 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573282 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573286 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573479 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573878 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573918 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573939 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-node-log" (OuterVolumeSpecName: "node-log") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573965 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.573986 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.575022 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.575031 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.575135 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-slash" (OuterVolumeSpecName: "host-slash") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.575156 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.575178 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.575201 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-log-socket" (OuterVolumeSpecName: "log-socket") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). 
InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.575561 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.575973 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.578558 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.578684 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b75d4675-9c37-47cf-8fa3-11097aa379ca-kube-api-access-ljp8p" (OuterVolumeSpecName: "kube-api-access-ljp8p") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "kube-api-access-ljp8p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: W1209 15:06:50.579682 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb553041b_fd27_4e75_8e82_35c45820589a.slice/crio-f9b91f21cfb9bafa6721fdca6bf590deb7469b5dea309513cafc3dd7c0129584 WatchSource:0}: Error finding container f9b91f21cfb9bafa6721fdca6bf590deb7469b5dea309513cafc3dd7c0129584: Status 404 returned error can't find the container with id f9b91f21cfb9bafa6721fdca6bf590deb7469b5dea309513cafc3dd7c0129584 Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.585936 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b75d4675-9c37-47cf-8fa3-11097aa379ca" (UID: "b75d4675-9c37-47cf-8fa3-11097aa379ca"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.675985 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-kubelet\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676036 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-run-systemd\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676080 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44841c2b-1739-439c-ad17-a65d8c3d1a6f-env-overrides\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676098 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-etc-openvswitch\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676113 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676180 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-systemd-units\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676220 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-cni-bin\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676235 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44841c2b-1739-439c-ad17-a65d8c3d1a6f-ovnkube-config\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676254 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-cni-netd\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676271 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-run-openvswitch\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676358 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-run-ovn-kubernetes\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676428 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-var-lib-openvswitch\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676462 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-run-netns\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676510 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-slash\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676539 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-node-log\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676561 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-log-socket\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676602 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdppn\" (UniqueName: \"kubernetes.io/projected/44841c2b-1739-439c-ad17-a65d8c3d1a6f-kube-api-access-zdppn\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676628 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-run-ovn\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676643 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44841c2b-1739-439c-ad17-a65d8c3d1a6f-ovnkube-script-lib\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676712 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44841c2b-1739-439c-ad17-a65d8c3d1a6f-ovn-node-metrics-cert\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676768 5107 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-node-log\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676789 5107 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676801 5107 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676812 5107 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676830 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676843 5107 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676853 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676864 5107 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-slash\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676874 5107 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc 
kubenswrapper[5107]: I1209 15:06:50.676884 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676895 5107 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-log-socket\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676906 5107 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676917 5107 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676928 5107 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676938 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b75d4675-9c37-47cf-8fa3-11097aa379ca-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676948 5107 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676970 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ljp8p\" (UniqueName: \"kubernetes.io/projected/b75d4675-9c37-47cf-8fa3-11097aa379ca-kube-api-access-ljp8p\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676981 5107 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.676991 5107 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.677003 5107 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b75d4675-9c37-47cf-8fa3-11097aa379ca-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778584 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-cni-bin\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778631 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/44841c2b-1739-439c-ad17-a65d8c3d1a6f-ovnkube-config\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778653 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-cni-netd\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778668 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-run-openvswitch\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778684 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-run-ovn-kubernetes\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778695 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-cni-bin\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778703 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-var-lib-openvswitch\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778744 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-run-openvswitch\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778747 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-cni-netd\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778781 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-var-lib-openvswitch\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778786 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-run-ovn-kubernetes\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778923 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-run-netns\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778954 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-slash\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778974 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-node-log\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778973 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-run-netns\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.778989 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-log-socket\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779009 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zdppn\" (UniqueName: \"kubernetes.io/projected/44841c2b-1739-439c-ad17-a65d8c3d1a6f-kube-api-access-zdppn\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779018 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-slash\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779038 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-node-log\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779049 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-log-socket\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779069 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-run-ovn\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779088 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44841c2b-1739-439c-ad17-a65d8c3d1a6f-ovnkube-script-lib\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779118 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44841c2b-1739-439c-ad17-a65d8c3d1a6f-ovn-node-metrics-cert\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779131 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-run-ovn\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779141 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-kubelet\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779164 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-run-systemd\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779183 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44841c2b-1739-439c-ad17-a65d8c3d1a6f-env-overrides\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779206 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-etc-openvswitch\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779226 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc 
kubenswrapper[5107]: I1209 15:06:50.779253 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-systemd-units\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779304 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-systemd-units\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779326 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-kubelet\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779361 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-run-systemd\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779577 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-etc-openvswitch\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779648 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44841c2b-1739-439c-ad17-a65d8c3d1a6f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779740 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44841c2b-1739-439c-ad17-a65d8c3d1a6f-ovnkube-config\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779763 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44841c2b-1739-439c-ad17-a65d8c3d1a6f-ovnkube-script-lib\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.779808 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44841c2b-1739-439c-ad17-a65d8c3d1a6f-env-overrides\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.787410 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44841c2b-1739-439c-ad17-a65d8c3d1a6f-ovn-node-metrics-cert\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.799664 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdppn\" (UniqueName: \"kubernetes.io/projected/44841c2b-1739-439c-ad17-a65d8c3d1a6f-kube-api-access-zdppn\") pod \"ovnkube-node-vsnlz\" (UID: \"44841c2b-1739-439c-ad17-a65d8c3d1a6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:50 crc kubenswrapper[5107]: I1209 15:06:50.873388 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.083481 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" event={"ID":"b553041b-fd27-4e75-8e82-35c45820589a","Type":"ContainerStarted","Data":"a2be28002cbfc1e4a2315adb9471fef935c4f3191e1aac1ae90e33b23f89cabf"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.083532 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" event={"ID":"b553041b-fd27-4e75-8e82-35c45820589a","Type":"ContainerStarted","Data":"5420c17066ef81e8787c4c9594e36e5b53829b093017acb34c874fca73e28dbd"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.083542 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" event={"ID":"b553041b-fd27-4e75-8e82-35c45820589a","Type":"ContainerStarted","Data":"f9b91f21cfb9bafa6721fdca6bf590deb7469b5dea309513cafc3dd7c0129584"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.088005 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rjcr_b75d4675-9c37-47cf-8fa3-11097aa379ca/ovn-acl-logging/0.log" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.088586 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rjcr_b75d4675-9c37-47cf-8fa3-11097aa379ca/ovn-controller/0.log" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.088916 5107 generic.go:358] "Generic (PLEG): container finished" podID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerID="0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3" exitCode=0 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089008 5107 generic.go:358] "Generic (PLEG): container finished" podID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerID="8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf" exitCode=0 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089069 5107 generic.go:358] "Generic (PLEG): container finished" podID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerID="30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9" exitCode=0 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089113 5107 generic.go:358] "Generic (PLEG): container finished" podID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerID="9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4" exitCode=0 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089171 5107 generic.go:358] "Generic (PLEG): container finished" podID="b75d4675-9c37-47cf-8fa3-11097aa379ca" 
containerID="8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221" exitCode=0 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089253 5107 generic.go:358] "Generic (PLEG): container finished" podID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerID="25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b" exitCode=0 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089313 5107 generic.go:358] "Generic (PLEG): container finished" podID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerID="f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a" exitCode=143 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089416 5107 generic.go:358] "Generic (PLEG): container finished" podID="b75d4675-9c37-47cf-8fa3-11097aa379ca" containerID="2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2" exitCode=143 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089534 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerDied","Data":"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089616 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerDied","Data":"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089681 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerDied","Data":"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089743 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerDied","Data":"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089820 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerDied","Data":"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089874 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerDied","Data":"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.089941 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090019 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090072 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709"} Dec 09 15:06:51 crc kubenswrapper[5107]: 
I1209 15:06:51.090139 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerDied","Data":"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090206 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090282 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090366 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090424 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090477 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090526 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090572 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090617 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090659 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090704 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerDied","Data":"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090759 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090808 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090850 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090896 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.090937 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091006 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091054 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091099 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091140 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091189 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" event={"ID":"b75d4675-9c37-47cf-8fa3-11097aa379ca","Type":"ContainerDied","Data":"79f1c4a7c26eac86b0cbccd2041c843533457d514a69a9bd632969d3b1532e69"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091243 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091289 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091349 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091395 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091443 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091490 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091537 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091586 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091633 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091690 5107 scope.go:117] "RemoveContainer" containerID="0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.091921 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9rjcr" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.146880 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7sv4_357946f5-b5ee-4739-a2c3-62beb5aedb57/kube-multus/0.log" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.147084 5107 generic.go:358] "Generic (PLEG): container finished" podID="357946f5-b5ee-4739-a2c3-62beb5aedb57" containerID="c064ed809838e4dedd5ad60bcbd76d399a059f3dc74e14be811ff1103fc3bf38" exitCode=2 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.147220 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g7sv4" event={"ID":"357946f5-b5ee-4739-a2c3-62beb5aedb57","Type":"ContainerDied","Data":"c064ed809838e4dedd5ad60bcbd76d399a059f3dc74e14be811ff1103fc3bf38"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.147789 5107 scope.go:117] "RemoveContainer" containerID="c064ed809838e4dedd5ad60bcbd76d399a059f3dc74e14be811ff1103fc3bf38" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153244 5107 generic.go:358] "Generic (PLEG): container finished" podID="035458af-eba0-4241-bcac-4e11d6358b21" containerID="8eb899657bdc52cb8444c544352a0c306b439d5fe4c54705d994a6ac368a93e8" exitCode=0 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153264 5107 generic.go:358] "Generic (PLEG): container finished" podID="035458af-eba0-4241-bcac-4e11d6358b21" containerID="d4a3745133e65e2be2bd2cbac84be5bd1275f003ea6f2c4be8b1ccb707efe066" exitCode=0 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153401 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" event={"ID":"035458af-eba0-4241-bcac-4e11d6358b21","Type":"ContainerDied","Data":"8eb899657bdc52cb8444c544352a0c306b439d5fe4c54705d994a6ac368a93e8"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153421 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eb899657bdc52cb8444c544352a0c306b439d5fe4c54705d994a6ac368a93e8"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153429 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4a3745133e65e2be2bd2cbac84be5bd1275f003ea6f2c4be8b1ccb707efe066"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153440 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" 
event={"ID":"035458af-eba0-4241-bcac-4e11d6358b21","Type":"ContainerDied","Data":"d4a3745133e65e2be2bd2cbac84be5bd1275f003ea6f2c4be8b1ccb707efe066"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153471 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eb899657bdc52cb8444c544352a0c306b439d5fe4c54705d994a6ac368a93e8"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153479 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4a3745133e65e2be2bd2cbac84be5bd1275f003ea6f2c4be8b1ccb707efe066"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153488 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" event={"ID":"035458af-eba0-4241-bcac-4e11d6358b21","Type":"ContainerDied","Data":"a820694d9153c2a954e23485a43029cd4958b9989b94aad2b90bac7eb0e544e7"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153496 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eb899657bdc52cb8444c544352a0c306b439d5fe4c54705d994a6ac368a93e8"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153503 5107 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4a3745133e65e2be2bd2cbac84be5bd1275f003ea6f2c4be8b1ccb707efe066"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.153535 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.156206 5107 generic.go:358] "Generic (PLEG): container finished" podID="44841c2b-1739-439c-ad17-a65d8c3d1a6f" containerID="10e4aadf6b93b839ce6d44de6c37375d9edb15423382ee82aed38f2e383e742c" exitCode=0 Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.156270 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" event={"ID":"44841c2b-1739-439c-ad17-a65d8c3d1a6f","Type":"ContainerDied","Data":"10e4aadf6b93b839ce6d44de6c37375d9edb15423382ee82aed38f2e383e742c"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.156295 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" event={"ID":"44841c2b-1739-439c-ad17-a65d8c3d1a6f","Type":"ContainerStarted","Data":"9a25288282cf4875572129845e4764f61bb849852a2cfd89920faae0db793167"} Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.161924 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-wkjjc" podStartSLOduration=1.161905779 podStartE2EDuration="1.161905779s" podCreationTimestamp="2025-12-09 15:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:06:51.161797946 +0000 UTC m=+658.885502835" watchObservedRunningTime="2025-12-09 15:06:51.161905779 +0000 UTC m=+658.885610668" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.203609 5107 scope.go:117] "RemoveContainer" containerID="8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.236403 5107 scope.go:117] "RemoveContainer" containerID="30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9" Dec 09 
15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.249740 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9rjcr"] Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.253653 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9rjcr"] Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.260175 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj"] Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.262996 5107 scope.go:117] "RemoveContainer" containerID="9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.264878 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-6zphj"] Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.285623 5107 scope.go:117] "RemoveContainer" containerID="8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.301491 5107 scope.go:117] "RemoveContainer" containerID="25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.316531 5107 scope.go:117] "RemoveContainer" containerID="f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.335619 5107 scope.go:117] "RemoveContainer" containerID="2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.355121 5107 scope.go:117] "RemoveContainer" containerID="84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.372234 5107 scope.go:117] "RemoveContainer" containerID="0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3" Dec 09 15:06:51 crc kubenswrapper[5107]: E1209 15:06:51.380465 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": container with ID starting with 0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3 not found: ID does not exist" containerID="0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.380526 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3"} err="failed to get container status \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": rpc error: code = NotFound desc = could not find container \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": container with ID starting with 0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.380558 5107 scope.go:117] "RemoveContainer" containerID="8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf" Dec 09 15:06:51 crc kubenswrapper[5107]: E1209 15:06:51.382832 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": container with ID starting with 8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf 
not found: ID does not exist" containerID="8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.382875 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf"} err="failed to get container status \"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": rpc error: code = NotFound desc = could not find container \"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": container with ID starting with 8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.382902 5107 scope.go:117] "RemoveContainer" containerID="30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9" Dec 09 15:06:51 crc kubenswrapper[5107]: E1209 15:06:51.385992 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": container with ID starting with 30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9 not found: ID does not exist" containerID="30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.386034 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9"} err="failed to get container status \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": rpc error: code = NotFound desc = could not find container \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": container with ID starting with 30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.386061 5107 scope.go:117] "RemoveContainer" containerID="9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4" Dec 09 15:06:51 crc kubenswrapper[5107]: E1209 15:06:51.387160 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": container with ID starting with 9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4 not found: ID does not exist" containerID="9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.387208 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4"} err="failed to get container status \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": rpc error: code = NotFound desc = could not find container \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": container with ID starting with 9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.387232 5107 scope.go:117] "RemoveContainer" containerID="8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221" Dec 09 15:06:51 crc kubenswrapper[5107]: E1209 15:06:51.387776 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": container with ID starting with 8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221 not found: ID does not exist" containerID="8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.387803 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221"} err="failed to get container status \"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": rpc error: code = NotFound desc = could not find container \"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": container with ID starting with 8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.387819 5107 scope.go:117] "RemoveContainer" containerID="25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b" Dec 09 15:06:51 crc kubenswrapper[5107]: E1209 15:06:51.388088 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": container with ID starting with 25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b not found: ID does not exist" containerID="25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.388175 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b"} err="failed to get container status \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": rpc error: code = NotFound desc = could not find container \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": container with ID starting with 25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.388252 5107 scope.go:117] "RemoveContainer" containerID="f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a" Dec 09 15:06:51 crc kubenswrapper[5107]: E1209 15:06:51.388632 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a\": container with ID starting with f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a not found: ID does not exist" containerID="f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.388658 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a"} err="failed to get container status \"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a\": rpc error: code = NotFound desc = could not find container \"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a\": container with ID starting with f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.388673 5107 scope.go:117] "RemoveContainer" containerID="2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2" Dec 09 15:06:51 crc 
kubenswrapper[5107]: E1209 15:06:51.388874 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2\": container with ID starting with 2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2 not found: ID does not exist" containerID="2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.388954 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2"} err="failed to get container status \"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2\": rpc error: code = NotFound desc = could not find container \"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2\": container with ID starting with 2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.389026 5107 scope.go:117] "RemoveContainer" containerID="84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709" Dec 09 15:06:51 crc kubenswrapper[5107]: E1209 15:06:51.389476 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\": container with ID starting with 84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709 not found: ID does not exist" containerID="84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.389509 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709"} err="failed to get container status \"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\": rpc error: code = NotFound desc = could not find container \"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\": container with ID starting with 84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.389528 5107 scope.go:117] "RemoveContainer" containerID="0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.389786 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3"} err="failed to get container status \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": rpc error: code = NotFound desc = could not find container \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": container with ID starting with 0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.389894 5107 scope.go:117] "RemoveContainer" containerID="8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.390213 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf"} err="failed to get container status 
\"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": rpc error: code = NotFound desc = could not find container \"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": container with ID starting with 8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.390238 5107 scope.go:117] "RemoveContainer" containerID="30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.390474 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9"} err="failed to get container status \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": rpc error: code = NotFound desc = could not find container \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": container with ID starting with 30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.390556 5107 scope.go:117] "RemoveContainer" containerID="9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.390849 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4"} err="failed to get container status \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": rpc error: code = NotFound desc = could not find container \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": container with ID starting with 9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.390922 5107 scope.go:117] "RemoveContainer" containerID="8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.391207 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221"} err="failed to get container status \"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": rpc error: code = NotFound desc = could not find container \"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": container with ID starting with 8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.391310 5107 scope.go:117] "RemoveContainer" containerID="25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.391813 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b"} err="failed to get container status \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": rpc error: code = NotFound desc = could not find container \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": container with ID starting with 25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.391841 5107 scope.go:117] "RemoveContainer" 
containerID="f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.392142 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a"} err="failed to get container status \"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a\": rpc error: code = NotFound desc = could not find container \"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a\": container with ID starting with f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.392171 5107 scope.go:117] "RemoveContainer" containerID="2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.392639 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2"} err="failed to get container status \"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2\": rpc error: code = NotFound desc = could not find container \"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2\": container with ID starting with 2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.392659 5107 scope.go:117] "RemoveContainer" containerID="84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.392886 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709"} err="failed to get container status \"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\": rpc error: code = NotFound desc = could not find container \"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\": container with ID starting with 84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.392964 5107 scope.go:117] "RemoveContainer" containerID="0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.393452 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3"} err="failed to get container status \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": rpc error: code = NotFound desc = could not find container \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": container with ID starting with 0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.393477 5107 scope.go:117] "RemoveContainer" containerID="8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.394673 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf"} err="failed to get container status \"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": rpc error: code = NotFound desc = could not find 
container \"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": container with ID starting with 8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.394762 5107 scope.go:117] "RemoveContainer" containerID="30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.395033 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9"} err="failed to get container status \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": rpc error: code = NotFound desc = could not find container \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": container with ID starting with 30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.395064 5107 scope.go:117] "RemoveContainer" containerID="9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.395290 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4"} err="failed to get container status \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": rpc error: code = NotFound desc = could not find container \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": container with ID starting with 9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.395319 5107 scope.go:117] "RemoveContainer" containerID="8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.395571 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221"} err="failed to get container status \"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": rpc error: code = NotFound desc = could not find container \"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": container with ID starting with 8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.395597 5107 scope.go:117] "RemoveContainer" containerID="25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.395831 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b"} err="failed to get container status \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": rpc error: code = NotFound desc = could not find container \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": container with ID starting with 25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.395856 5107 scope.go:117] "RemoveContainer" containerID="f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.396047 5107 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a"} err="failed to get container status \"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a\": rpc error: code = NotFound desc = could not find container \"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a\": container with ID starting with f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.396069 5107 scope.go:117] "RemoveContainer" containerID="2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.396274 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2"} err="failed to get container status \"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2\": rpc error: code = NotFound desc = could not find container \"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2\": container with ID starting with 2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.396297 5107 scope.go:117] "RemoveContainer" containerID="84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.396524 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709"} err="failed to get container status \"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\": rpc error: code = NotFound desc = could not find container \"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\": container with ID starting with 84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.396543 5107 scope.go:117] "RemoveContainer" containerID="0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.396732 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3"} err="failed to get container status \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": rpc error: code = NotFound desc = could not find container \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": container with ID starting with 0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.396749 5107 scope.go:117] "RemoveContainer" containerID="8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.396890 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf"} err="failed to get container status \"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": rpc error: code = NotFound desc = could not find container \"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": container with ID starting with 
8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.396909 5107 scope.go:117] "RemoveContainer" containerID="30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.397050 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9"} err="failed to get container status \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": rpc error: code = NotFound desc = could not find container \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": container with ID starting with 30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.397071 5107 scope.go:117] "RemoveContainer" containerID="9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.397265 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4"} err="failed to get container status \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": rpc error: code = NotFound desc = could not find container \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": container with ID starting with 9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.397284 5107 scope.go:117] "RemoveContainer" containerID="8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.397508 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221"} err="failed to get container status \"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": rpc error: code = NotFound desc = could not find container \"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": container with ID starting with 8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.397537 5107 scope.go:117] "RemoveContainer" containerID="25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.397754 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b"} err="failed to get container status \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": rpc error: code = NotFound desc = could not find container \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": container with ID starting with 25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.397776 5107 scope.go:117] "RemoveContainer" containerID="f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.397992 5107 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a"} err="failed to get container status \"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a\": rpc error: code = NotFound desc = could not find container \"f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a\": container with ID starting with f03256890d3adb9b2fac5c7bdc862ddf606134e007253842b1682a19e541658a not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.398015 5107 scope.go:117] "RemoveContainer" containerID="2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.398174 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2"} err="failed to get container status \"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2\": rpc error: code = NotFound desc = could not find container \"2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2\": container with ID starting with 2b0bb45e7cfd1d29d0c683f330b9ec1bc1b9e55ea43414f5d3547a56c18f5be2 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.398195 5107 scope.go:117] "RemoveContainer" containerID="84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.398364 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709"} err="failed to get container status \"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\": rpc error: code = NotFound desc = could not find container \"84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709\": container with ID starting with 84b6d2404a4128c8b4f10eaac65fd21b1f2f5d59cb8517a27e2dc41709a2e709 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.398387 5107 scope.go:117] "RemoveContainer" containerID="0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.398543 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3"} err="failed to get container status \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": rpc error: code = NotFound desc = could not find container \"0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3\": container with ID starting with 0910dc324553e84e9dcd17c953a31b1e5247d9a03b9f3acc69353f0a8162c2c3 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.398563 5107 scope.go:117] "RemoveContainer" containerID="8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.398779 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf"} err="failed to get container status \"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": rpc error: code = NotFound desc = could not find container \"8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf\": container with ID starting with 8471ce2736ffcdf0a726fb416f7052bea6d7a64059254fadad2f2c5d7a22a6bf not found: ID does not exist" Dec 
09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.398799 5107 scope.go:117] "RemoveContainer" containerID="30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.398946 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9"} err="failed to get container status \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": rpc error: code = NotFound desc = could not find container \"30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9\": container with ID starting with 30fb1cef95981187208b2322962356f128220ccbce6500a3985bbe99ab43c3d9 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.399028 5107 scope.go:117] "RemoveContainer" containerID="9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.399322 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4"} err="failed to get container status \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": rpc error: code = NotFound desc = could not find container \"9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4\": container with ID starting with 9d9242463b71252c18cb08500d43ca4b4e22fd2eb01c223088f8237f55e91be4 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.399413 5107 scope.go:117] "RemoveContainer" containerID="8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.399688 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221"} err="failed to get container status \"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": rpc error: code = NotFound desc = could not find container \"8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221\": container with ID starting with 8c18c6bed23088ffc76fd043c2c7d5d9712210c38611642bd7f7ebc5810ab221 not found: ID does not exist" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.399710 5107 scope.go:117] "RemoveContainer" containerID="25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b" Dec 09 15:06:51 crc kubenswrapper[5107]: I1209 15:06:51.399921 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b"} err="failed to get container status \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": rpc error: code = NotFound desc = could not find container \"25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b\": container with ID starting with 25564c5cf5484e1a41682c97f3809d0f43a3b4f650c2de6755b1f61cf496646b not found: ID does not exist" Dec 09 15:06:52 crc kubenswrapper[5107]: I1209 15:06:52.166609 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7sv4_357946f5-b5ee-4739-a2c3-62beb5aedb57/kube-multus/0.log" Dec 09 15:06:52 crc kubenswrapper[5107]: I1209 15:06:52.167616 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g7sv4" 
event={"ID":"357946f5-b5ee-4739-a2c3-62beb5aedb57","Type":"ContainerStarted","Data":"53b8bf4738c74e4544464b0c3175d42021384bdf8afe5ce2eb3415e628cc301a"} Dec 09 15:06:52 crc kubenswrapper[5107]: I1209 15:06:52.169559 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" event={"ID":"44841c2b-1739-439c-ad17-a65d8c3d1a6f","Type":"ContainerStarted","Data":"69b692c6419fab9fb17d25173d46db996cd3169235690b1a87f0fd5a2b4707ac"} Dec 09 15:06:52 crc kubenswrapper[5107]: I1209 15:06:52.828202 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035458af-eba0-4241-bcac-4e11d6358b21" path="/var/lib/kubelet/pods/035458af-eba0-4241-bcac-4e11d6358b21/volumes" Dec 09 15:06:52 crc kubenswrapper[5107]: I1209 15:06:52.829656 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b75d4675-9c37-47cf-8fa3-11097aa379ca" path="/var/lib/kubelet/pods/b75d4675-9c37-47cf-8fa3-11097aa379ca/volumes" Dec 09 15:06:53 crc kubenswrapper[5107]: I1209 15:06:53.023257 5107 scope.go:117] "RemoveContainer" containerID="d4a3745133e65e2be2bd2cbac84be5bd1275f003ea6f2c4be8b1ccb707efe066" Dec 09 15:06:53 crc kubenswrapper[5107]: I1209 15:06:53.058529 5107 scope.go:117] "RemoveContainer" containerID="8eb899657bdc52cb8444c544352a0c306b439d5fe4c54705d994a6ac368a93e8" Dec 09 15:06:53 crc kubenswrapper[5107]: I1209 15:06:53.179425 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" event={"ID":"44841c2b-1739-439c-ad17-a65d8c3d1a6f","Type":"ContainerStarted","Data":"4489b830024bab8427faf41c7fcd03dad651c8c708dfe35483910d2358ef7012"} Dec 09 15:06:53 crc kubenswrapper[5107]: I1209 15:06:53.179486 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" event={"ID":"44841c2b-1739-439c-ad17-a65d8c3d1a6f","Type":"ContainerStarted","Data":"e3fc966d1f3aa5152621523faf5091d3fc034781069613735028d5fb4a3dba24"} Dec 09 15:06:53 crc kubenswrapper[5107]: I1209 15:06:53.179502 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" event={"ID":"44841c2b-1739-439c-ad17-a65d8c3d1a6f","Type":"ContainerStarted","Data":"80223e4e0805c33f1d6f43b31245c24bdc1c1f27d0ad9f5a9ac1d6a2a66a786b"} Dec 09 15:06:53 crc kubenswrapper[5107]: I1209 15:06:53.179548 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" event={"ID":"44841c2b-1739-439c-ad17-a65d8c3d1a6f","Type":"ContainerStarted","Data":"2fc35e13eff7edb99a490d66693ee88f8eda683eb6f6af01d4e6dc31d183fd2b"} Dec 09 15:06:54 crc kubenswrapper[5107]: I1209 15:06:54.195150 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" event={"ID":"44841c2b-1739-439c-ad17-a65d8c3d1a6f","Type":"ContainerStarted","Data":"1769b4f961c2710a2250893c46ae41219ae094771d25628580fa9dbb1b059d5a"} Dec 09 15:06:56 crc kubenswrapper[5107]: I1209 15:06:56.213124 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" event={"ID":"44841c2b-1739-439c-ad17-a65d8c3d1a6f","Type":"ContainerStarted","Data":"f46c869489cf0e08125059814188297d53a4a6c47f04c7b4a1d5213b561641cc"} Dec 09 15:07:00 crc kubenswrapper[5107]: I1209 15:07:00.237144 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" 
event={"ID":"44841c2b-1739-439c-ad17-a65d8c3d1a6f","Type":"ContainerStarted","Data":"a3cf5a2a7d87694547161087a6407c6b46fafdff052f8b541666437953dbc296"} Dec 09 15:07:00 crc kubenswrapper[5107]: I1209 15:07:00.237609 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:07:00 crc kubenswrapper[5107]: I1209 15:07:00.267186 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:07:00 crc kubenswrapper[5107]: I1209 15:07:00.269570 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" podStartSLOduration=10.269554678 podStartE2EDuration="10.269554678s" podCreationTimestamp="2025-12-09 15:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:07:00.267780608 +0000 UTC m=+667.991485497" watchObservedRunningTime="2025-12-09 15:07:00.269554678 +0000 UTC m=+667.993259567" Dec 09 15:07:01 crc kubenswrapper[5107]: I1209 15:07:01.242276 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:07:01 crc kubenswrapper[5107]: I1209 15:07:01.242690 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:07:01 crc kubenswrapper[5107]: I1209 15:07:01.272647 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:07:33 crc kubenswrapper[5107]: I1209 15:07:33.270057 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnlz" Dec 09 15:08:02 crc kubenswrapper[5107]: I1209 15:08:02.797311 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skk5r"] Dec 09 15:08:02 crc kubenswrapper[5107]: I1209 15:08:02.798963 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-skk5r" podUID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerName="registry-server" containerID="cri-o://a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5" gracePeriod=30 Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.131254 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.248727 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-utilities\") pod \"a92286b2-fcc7-4fac-bcb9-75abe429385d\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.248898 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s9fc\" (UniqueName: \"kubernetes.io/projected/a92286b2-fcc7-4fac-bcb9-75abe429385d-kube-api-access-4s9fc\") pod \"a92286b2-fcc7-4fac-bcb9-75abe429385d\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.249054 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-catalog-content\") pod \"a92286b2-fcc7-4fac-bcb9-75abe429385d\" (UID: \"a92286b2-fcc7-4fac-bcb9-75abe429385d\") " Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.250028 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-utilities" (OuterVolumeSpecName: "utilities") pod "a92286b2-fcc7-4fac-bcb9-75abe429385d" (UID: "a92286b2-fcc7-4fac-bcb9-75abe429385d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.256010 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92286b2-fcc7-4fac-bcb9-75abe429385d-kube-api-access-4s9fc" (OuterVolumeSpecName: "kube-api-access-4s9fc") pod "a92286b2-fcc7-4fac-bcb9-75abe429385d" (UID: "a92286b2-fcc7-4fac-bcb9-75abe429385d"). InnerVolumeSpecName "kube-api-access-4s9fc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.261932 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a92286b2-fcc7-4fac-bcb9-75abe429385d" (UID: "a92286b2-fcc7-4fac-bcb9-75abe429385d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.350529 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4s9fc\" (UniqueName: \"kubernetes.io/projected/a92286b2-fcc7-4fac-bcb9-75abe429385d-kube-api-access-4s9fc\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.350559 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.350568 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92286b2-fcc7-4fac-bcb9-75abe429385d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.600813 5107 generic.go:358] "Generic (PLEG): container finished" podID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerID="a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5" exitCode=0 Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.600854 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skk5r" event={"ID":"a92286b2-fcc7-4fac-bcb9-75abe429385d","Type":"ContainerDied","Data":"a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5"} Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.600913 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skk5r" event={"ID":"a92286b2-fcc7-4fac-bcb9-75abe429385d","Type":"ContainerDied","Data":"834af3aa39a91e9db02a20a3febccbeb49bfea31439f1ef696a3de103716f08b"} Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.600938 5107 scope.go:117] "RemoveContainer" containerID="a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.600970 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skk5r" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.619005 5107 scope.go:117] "RemoveContainer" containerID="8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.635531 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skk5r"] Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.639232 5107 scope.go:117] "RemoveContainer" containerID="4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.639969 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-skk5r"] Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.661063 5107 scope.go:117] "RemoveContainer" containerID="a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5" Dec 09 15:08:03 crc kubenswrapper[5107]: E1209 15:08:03.661451 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5\": container with ID starting with a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5 not found: ID does not exist" containerID="a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.661491 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5"} err="failed to get container status \"a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5\": rpc error: code = NotFound desc = could not find container \"a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5\": container with ID starting with a700dd0f17cb5d3b59087865b5e7639c23f2be44878671c0140509de52acbcc5 not found: ID does not exist" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.661520 5107 scope.go:117] "RemoveContainer" containerID="8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784" Dec 09 15:08:03 crc kubenswrapper[5107]: E1209 15:08:03.661850 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784\": container with ID starting with 8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784 not found: ID does not exist" containerID="8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.661892 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784"} err="failed to get container status \"8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784\": rpc error: code = NotFound desc = could not find container \"8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784\": container with ID starting with 8299fc93cf6dafe1d85a67163229d6618bc4c11149c19c98373f84b97892d784 not found: ID does not exist" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.661918 5107 scope.go:117] "RemoveContainer" containerID="4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a" Dec 09 15:08:03 crc kubenswrapper[5107]: E1209 15:08:03.662736 5107 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a\": container with ID starting with 4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a not found: ID does not exist" containerID="4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.662761 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a"} err="failed to get container status \"4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a\": rpc error: code = NotFound desc = could not find container \"4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a\": container with ID starting with 4cf665a7b35757212f45c01c417e6733d9aa7d61a8dc699c584b06848fe7c53a not found: ID does not exist" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.836488 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-9mv45"] Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.837004 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerName="extract-utilities" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.837016 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerName="extract-utilities" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.837037 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerName="registry-server" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.837043 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerName="registry-server" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.837059 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerName="extract-content" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.837065 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerName="extract-content" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.837140 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="a92286b2-fcc7-4fac-bcb9-75abe429385d" containerName="registry-server" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.846269 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.854802 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-9mv45"] Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.958449 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/970ad539-430c-406a-8786-51f150cea307-registry-certificates\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.958633 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/970ad539-430c-406a-8786-51f150cea307-registry-tls\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.958685 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/970ad539-430c-406a-8786-51f150cea307-bound-sa-token\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.958720 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/970ad539-430c-406a-8786-51f150cea307-trusted-ca\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.958854 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.958897 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb2c2\" (UniqueName: \"kubernetes.io/projected/970ad539-430c-406a-8786-51f150cea307-kube-api-access-vb2c2\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.958936 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/970ad539-430c-406a-8786-51f150cea307-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.958970 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/970ad539-430c-406a-8786-51f150cea307-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:03 crc kubenswrapper[5107]: I1209 15:08:03.979617 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.059848 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vb2c2\" (UniqueName: \"kubernetes.io/projected/970ad539-430c-406a-8786-51f150cea307-kube-api-access-vb2c2\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.059891 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/970ad539-430c-406a-8786-51f150cea307-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.059930 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/970ad539-430c-406a-8786-51f150cea307-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.060067 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/970ad539-430c-406a-8786-51f150cea307-registry-certificates\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.060242 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/970ad539-430c-406a-8786-51f150cea307-registry-tls\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.060261 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/970ad539-430c-406a-8786-51f150cea307-bound-sa-token\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.060279 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/970ad539-430c-406a-8786-51f150cea307-trusted-ca\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.060352 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/970ad539-430c-406a-8786-51f150cea307-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.061520 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/970ad539-430c-406a-8786-51f150cea307-registry-certificates\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.061590 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/970ad539-430c-406a-8786-51f150cea307-trusted-ca\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.066057 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/970ad539-430c-406a-8786-51f150cea307-registry-tls\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.066133 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/970ad539-430c-406a-8786-51f150cea307-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.078119 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb2c2\" (UniqueName: \"kubernetes.io/projected/970ad539-430c-406a-8786-51f150cea307-kube-api-access-vb2c2\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.078281 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/970ad539-430c-406a-8786-51f150cea307-bound-sa-token\") pod \"image-registry-5d9d95bf5b-9mv45\" (UID: \"970ad539-430c-406a-8786-51f150cea307\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.161624 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.370731 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-9mv45"] Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.610801 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" event={"ID":"970ad539-430c-406a-8786-51f150cea307","Type":"ContainerStarted","Data":"81e7fe3a9f51daa66df32da7409d5081521c6bbc5c645dd7ab408dd15d521167"} Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.610842 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" event={"ID":"970ad539-430c-406a-8786-51f150cea307","Type":"ContainerStarted","Data":"68ad8b541ba491dcf2d28c23817543fff3c4ac9a5d812ce4cc713ef3b95c7f34"} Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.611061 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.633179 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" podStartSLOduration=1.633164697 podStartE2EDuration="1.633164697s" podCreationTimestamp="2025-12-09 15:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:08:04.63183746 +0000 UTC m=+732.355542389" watchObservedRunningTime="2025-12-09 15:08:04.633164697 +0000 UTC m=+732.356869586" Dec 09 15:08:04 crc kubenswrapper[5107]: I1209 15:08:04.824572 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92286b2-fcc7-4fac-bcb9-75abe429385d" path="/var/lib/kubelet/pods/a92286b2-fcc7-4fac-bcb9-75abe429385d/volumes" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.488664 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns"] Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.500841 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.501877 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns"] Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.503491 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.592745 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.592809 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzhg7\" (UniqueName: \"kubernetes.io/projected/db74a3ff-2d00-4139-8227-bb29dc96ea44-kube-api-access-vzhg7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.592843 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.694588 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.694629 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vzhg7\" (UniqueName: \"kubernetes.io/projected/db74a3ff-2d00-4139-8227-bb29dc96ea44-kube-api-access-vzhg7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.694655 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.695159 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.695280 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.718418 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzhg7\" (UniqueName: \"kubernetes.io/projected/db74a3ff-2d00-4139-8227-bb29dc96ea44-kube-api-access-vzhg7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:06 crc kubenswrapper[5107]: I1209 15:08:06.850960 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:07 crc kubenswrapper[5107]: I1209 15:08:07.067626 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns"] Dec 09 15:08:07 crc kubenswrapper[5107]: I1209 15:08:07.628278 5107 generic.go:358] "Generic (PLEG): container finished" podID="db74a3ff-2d00-4139-8227-bb29dc96ea44" containerID="8f168c168f10d4e1467a99fdf5fda90b77fffdee064e0f6851ef9485457e7ceb" exitCode=0 Dec 09 15:08:07 crc kubenswrapper[5107]: I1209 15:08:07.628322 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" event={"ID":"db74a3ff-2d00-4139-8227-bb29dc96ea44","Type":"ContainerDied","Data":"8f168c168f10d4e1467a99fdf5fda90b77fffdee064e0f6851ef9485457e7ceb"} Dec 09 15:08:07 crc kubenswrapper[5107]: I1209 15:08:07.628391 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" event={"ID":"db74a3ff-2d00-4139-8227-bb29dc96ea44","Type":"ContainerStarted","Data":"24fefa51d26a6674fb3b8218e29b4e30cbadadfd3082089f411c19a4cc8a3405"} Dec 09 15:08:09 crc kubenswrapper[5107]: I1209 15:08:09.648642 5107 generic.go:358] "Generic (PLEG): container finished" podID="db74a3ff-2d00-4139-8227-bb29dc96ea44" containerID="b4f2a62975e88bb78422cd5ec3a9d5b524692191b1953f2efd587d95f6c74563" exitCode=0 Dec 09 15:08:09 crc kubenswrapper[5107]: I1209 15:08:09.648735 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" event={"ID":"db74a3ff-2d00-4139-8227-bb29dc96ea44","Type":"ContainerDied","Data":"b4f2a62975e88bb78422cd5ec3a9d5b524692191b1953f2efd587d95f6c74563"} Dec 09 15:08:10 crc kubenswrapper[5107]: I1209 15:08:10.661654 5107 generic.go:358] "Generic (PLEG): container finished" podID="db74a3ff-2d00-4139-8227-bb29dc96ea44" containerID="52b5ac0b4bc55bf84d1fc58ccf74c70bc7eaa3ef6c2e3ac0c8b03118937c78f2" exitCode=0 Dec 09 15:08:10 crc kubenswrapper[5107]: I1209 
15:08:10.662536 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" event={"ID":"db74a3ff-2d00-4139-8227-bb29dc96ea44","Type":"ContainerDied","Data":"52b5ac0b4bc55bf84d1fc58ccf74c70bc7eaa3ef6c2e3ac0c8b03118937c78f2"} Dec 09 15:08:11 crc kubenswrapper[5107]: I1209 15:08:11.894542 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:11 crc kubenswrapper[5107]: I1209 15:08:11.967374 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzhg7\" (UniqueName: \"kubernetes.io/projected/db74a3ff-2d00-4139-8227-bb29dc96ea44-kube-api-access-vzhg7\") pod \"db74a3ff-2d00-4139-8227-bb29dc96ea44\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " Dec 09 15:08:11 crc kubenswrapper[5107]: I1209 15:08:11.967699 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-bundle\") pod \"db74a3ff-2d00-4139-8227-bb29dc96ea44\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " Dec 09 15:08:11 crc kubenswrapper[5107]: I1209 15:08:11.967824 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-util\") pod \"db74a3ff-2d00-4139-8227-bb29dc96ea44\" (UID: \"db74a3ff-2d00-4139-8227-bb29dc96ea44\") " Dec 09 15:08:11 crc kubenswrapper[5107]: I1209 15:08:11.969958 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-bundle" (OuterVolumeSpecName: "bundle") pod "db74a3ff-2d00-4139-8227-bb29dc96ea44" (UID: "db74a3ff-2d00-4139-8227-bb29dc96ea44"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:11 crc kubenswrapper[5107]: I1209 15:08:11.973996 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db74a3ff-2d00-4139-8227-bb29dc96ea44-kube-api-access-vzhg7" (OuterVolumeSpecName: "kube-api-access-vzhg7") pod "db74a3ff-2d00-4139-8227-bb29dc96ea44" (UID: "db74a3ff-2d00-4139-8227-bb29dc96ea44"). InnerVolumeSpecName "kube-api-access-vzhg7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:08:11 crc kubenswrapper[5107]: I1209 15:08:11.979090 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-util" (OuterVolumeSpecName: "util") pod "db74a3ff-2d00-4139-8227-bb29dc96ea44" (UID: "db74a3ff-2d00-4139-8227-bb29dc96ea44"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:12 crc kubenswrapper[5107]: I1209 15:08:12.069204 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vzhg7\" (UniqueName: \"kubernetes.io/projected/db74a3ff-2d00-4139-8227-bb29dc96ea44-kube-api-access-vzhg7\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:12 crc kubenswrapper[5107]: I1209 15:08:12.069239 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:12 crc kubenswrapper[5107]: I1209 15:08:12.069248 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/db74a3ff-2d00-4139-8227-bb29dc96ea44-util\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:12 crc kubenswrapper[5107]: I1209 15:08:12.676883 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" event={"ID":"db74a3ff-2d00-4139-8227-bb29dc96ea44","Type":"ContainerDied","Data":"24fefa51d26a6674fb3b8218e29b4e30cbadadfd3082089f411c19a4cc8a3405"} Dec 09 15:08:12 crc kubenswrapper[5107]: I1209 15:08:12.676935 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qt7ns" Dec 09 15:08:12 crc kubenswrapper[5107]: I1209 15:08:12.676958 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24fefa51d26a6674fb3b8218e29b4e30cbadadfd3082089f411c19a4cc8a3405" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.802260 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb"] Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.804431 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db74a3ff-2d00-4139-8227-bb29dc96ea44" containerName="util" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.804536 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="db74a3ff-2d00-4139-8227-bb29dc96ea44" containerName="util" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.804608 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db74a3ff-2d00-4139-8227-bb29dc96ea44" containerName="extract" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.804687 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="db74a3ff-2d00-4139-8227-bb29dc96ea44" containerName="extract" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.804765 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db74a3ff-2d00-4139-8227-bb29dc96ea44" containerName="pull" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.804835 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="db74a3ff-2d00-4139-8227-bb29dc96ea44" containerName="pull" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.805026 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="db74a3ff-2d00-4139-8227-bb29dc96ea44" containerName="extract" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.809625 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.820057 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.828382 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb"] Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.942062 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.942405 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:16 crc kubenswrapper[5107]: I1209 15:08:16.942533 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdbn6\" (UniqueName: \"kubernetes.io/projected/3fa6e0f0-2067-430e-ac21-2cd0c0118655-kube-api-access-mdbn6\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.044049 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.044140 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.044168 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mdbn6\" (UniqueName: \"kubernetes.io/projected/3fa6e0f0-2067-430e-ac21-2cd0c0118655-kube-api-access-mdbn6\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.044965 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.045227 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.066883 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdbn6\" (UniqueName: \"kubernetes.io/projected/3fa6e0f0-2067-430e-ac21-2cd0c0118655-kube-api-access-mdbn6\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.130504 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.294992 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s"] Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.309399 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.312896 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s"] Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.448938 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.448997 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcgmg\" (UniqueName: \"kubernetes.io/projected/78a20010-5dd1-4843-811f-0ed58f38b127-kube-api-access-bcgmg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.449060 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 
crc kubenswrapper[5107]: I1209 15:08:17.453295 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb"] Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.549878 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.549947 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.549974 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bcgmg\" (UniqueName: \"kubernetes.io/projected/78a20010-5dd1-4843-811f-0ed58f38b127-kube-api-access-bcgmg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.550588 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.550804 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.577286 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcgmg\" (UniqueName: \"kubernetes.io/projected/78a20010-5dd1-4843-811f-0ed58f38b127-kube-api-access-bcgmg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.642377 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.705197 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" event={"ID":"3fa6e0f0-2067-430e-ac21-2cd0c0118655","Type":"ContainerStarted","Data":"728778a3324d8789bdb24a56de4f69fc81827e4be2a502a0b83c5f1896fdd7b2"} Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.705245 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" event={"ID":"3fa6e0f0-2067-430e-ac21-2cd0c0118655","Type":"ContainerStarted","Data":"8d5b05eb2a8f717880fb7a6f2b1942f3bc3844245f570e2c3234f4bc288feb55"} Dec 09 15:08:17 crc kubenswrapper[5107]: I1209 15:08:17.871831 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s"] Dec 09 15:08:17 crc kubenswrapper[5107]: W1209 15:08:17.877153 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78a20010_5dd1_4843_811f_0ed58f38b127.slice/crio-fb54d6a36ea84d14d2d339a5d5b5b59c67f74270ddafccb990cd4681aae6d60d WatchSource:0}: Error finding container fb54d6a36ea84d14d2d339a5d5b5b59c67f74270ddafccb990cd4681aae6d60d: Status 404 returned error can't find the container with id fb54d6a36ea84d14d2d339a5d5b5b59c67f74270ddafccb990cd4681aae6d60d Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.277110 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm"] Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.287664 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.291009 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm"] Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.366087 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.366377 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp6kx\" (UniqueName: \"kubernetes.io/projected/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-kube-api-access-rp6kx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.366478 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.468017 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.468107 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rp6kx\" (UniqueName: \"kubernetes.io/projected/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-kube-api-access-rp6kx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.468155 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.468617 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.468722 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.490181 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp6kx\" (UniqueName: \"kubernetes.io/projected/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-kube-api-access-rp6kx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.604306 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.713035 5107 generic.go:358] "Generic (PLEG): container finished" podID="78a20010-5dd1-4843-811f-0ed58f38b127" containerID="462e4efe97e98ebada75a5d546ade75451c63fc8b435aba103f9a7938febb7d6" exitCode=0 Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.713083 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" event={"ID":"78a20010-5dd1-4843-811f-0ed58f38b127","Type":"ContainerDied","Data":"462e4efe97e98ebada75a5d546ade75451c63fc8b435aba103f9a7938febb7d6"} Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.713118 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" event={"ID":"78a20010-5dd1-4843-811f-0ed58f38b127","Type":"ContainerStarted","Data":"fb54d6a36ea84d14d2d339a5d5b5b59c67f74270ddafccb990cd4681aae6d60d"} Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.714693 5107 generic.go:358] "Generic (PLEG): container finished" podID="3fa6e0f0-2067-430e-ac21-2cd0c0118655" containerID="728778a3324d8789bdb24a56de4f69fc81827e4be2a502a0b83c5f1896fdd7b2" exitCode=0 Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.714908 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" event={"ID":"3fa6e0f0-2067-430e-ac21-2cd0c0118655","Type":"ContainerDied","Data":"728778a3324d8789bdb24a56de4f69fc81827e4be2a502a0b83c5f1896fdd7b2"} Dec 09 15:08:18 crc kubenswrapper[5107]: I1209 15:08:18.826707 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm"] Dec 09 15:08:19 crc kubenswrapper[5107]: I1209 15:08:19.725022 5107 generic.go:358] "Generic (PLEG): container finished" podID="a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" containerID="592758dc64f8a0dd97a5f8ebafe44928c1cbb44c30f919931a2bb41fbae333b0" exitCode=0 Dec 09 15:08:19 crc kubenswrapper[5107]: I1209 15:08:19.725152 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" 
event={"ID":"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95","Type":"ContainerDied","Data":"592758dc64f8a0dd97a5f8ebafe44928c1cbb44c30f919931a2bb41fbae333b0"} Dec 09 15:08:19 crc kubenswrapper[5107]: I1209 15:08:19.725745 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" event={"ID":"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95","Type":"ContainerStarted","Data":"03bb8106a35ea8f529f8fc1be4ac438ffa5518c34a517a51289a5424b735dc41"} Dec 09 15:08:20 crc kubenswrapper[5107]: I1209 15:08:20.732984 5107 generic.go:358] "Generic (PLEG): container finished" podID="78a20010-5dd1-4843-811f-0ed58f38b127" containerID="22b4a0d47614af6a00ec222f281e220c8f64b2b58d41dbe531a781ffebf638eb" exitCode=0 Dec 09 15:08:20 crc kubenswrapper[5107]: I1209 15:08:20.733094 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" event={"ID":"78a20010-5dd1-4843-811f-0ed58f38b127","Type":"ContainerDied","Data":"22b4a0d47614af6a00ec222f281e220c8f64b2b58d41dbe531a781ffebf638eb"} Dec 09 15:08:20 crc kubenswrapper[5107]: I1209 15:08:20.737225 5107 generic.go:358] "Generic (PLEG): container finished" podID="3fa6e0f0-2067-430e-ac21-2cd0c0118655" containerID="d4b1431441fbca8362bdbe29c6f9620eb30ee4ea9b8f869a68e7babe65dfc280" exitCode=0 Dec 09 15:08:20 crc kubenswrapper[5107]: I1209 15:08:20.737419 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" event={"ID":"3fa6e0f0-2067-430e-ac21-2cd0c0118655","Type":"ContainerDied","Data":"d4b1431441fbca8362bdbe29c6f9620eb30ee4ea9b8f869a68e7babe65dfc280"} Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.043622 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5k884"] Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.051988 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.069980 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5k884"] Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.208401 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-utilities\") pod \"redhat-operators-5k884\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.208672 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlcvn\" (UniqueName: \"kubernetes.io/projected/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-kube-api-access-qlcvn\") pod \"redhat-operators-5k884\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.208794 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-catalog-content\") pod \"redhat-operators-5k884\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.310013 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlcvn\" (UniqueName: \"kubernetes.io/projected/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-kube-api-access-qlcvn\") pod \"redhat-operators-5k884\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.310070 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-catalog-content\") pod \"redhat-operators-5k884\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.310125 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-utilities\") pod \"redhat-operators-5k884\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.310776 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-utilities\") pod \"redhat-operators-5k884\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.311398 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-catalog-content\") pod \"redhat-operators-5k884\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: W1209 15:08:21.322776 5107 helpers.go:245] readString: Failed to read 
"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fa6e0f0_2067_430e_ac21_2cd0c0118655.slice/crio-8d5b05eb2a8f717880fb7a6f2b1942f3bc3844245f570e2c3234f4bc288feb55/pids.max": open /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fa6e0f0_2067_430e_ac21_2cd0c0118655.slice/crio-8d5b05eb2a8f717880fb7a6f2b1942f3bc3844245f570e2c3234f4bc288feb55/pids.max: no such device Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.334850 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlcvn\" (UniqueName: \"kubernetes.io/projected/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-kube-api-access-qlcvn\") pod \"redhat-operators-5k884\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.372549 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:21 crc kubenswrapper[5107]: E1209 15:08:21.374367 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fa6e0f0_2067_430e_ac21_2cd0c0118655.slice/crio-785bacc215f0a8c682dea1e586b236c11fb321ebe50086830691f81333132c68.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fa6e0f0_2067_430e_ac21_2cd0c0118655.slice/crio-conmon-785bacc215f0a8c682dea1e586b236c11fb321ebe50086830691f81333132c68.scope\": RecentStats: unable to find data in memory cache]" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.747027 5107 generic.go:358] "Generic (PLEG): container finished" podID="78a20010-5dd1-4843-811f-0ed58f38b127" containerID="7b487318322cd74cb2333db9702c3bb8fb8f8e59b23b35af299ac5e0196b1fb8" exitCode=0 Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.747109 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" event={"ID":"78a20010-5dd1-4843-811f-0ed58f38b127","Type":"ContainerDied","Data":"7b487318322cd74cb2333db9702c3bb8fb8f8e59b23b35af299ac5e0196b1fb8"} Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.753000 5107 generic.go:358] "Generic (PLEG): container finished" podID="3fa6e0f0-2067-430e-ac21-2cd0c0118655" containerID="785bacc215f0a8c682dea1e586b236c11fb321ebe50086830691f81333132c68" exitCode=0 Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.753098 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" event={"ID":"3fa6e0f0-2067-430e-ac21-2cd0c0118655","Type":"ContainerDied","Data":"785bacc215f0a8c682dea1e586b236c11fb321ebe50086830691f81333132c68"} Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.829920 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7zmhq"] Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.844863 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.849268 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zmhq"] Dec 09 15:08:21 crc kubenswrapper[5107]: I1209 15:08:21.921810 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5k884"] Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.021182 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsmq7\" (UniqueName: \"kubernetes.io/projected/4ba50f4e-f753-47d4-9de7-1f20b82ca936-kube-api-access-fsmq7\") pod \"certified-operators-7zmhq\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.021270 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-utilities\") pod \"certified-operators-7zmhq\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.021435 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-catalog-content\") pod \"certified-operators-7zmhq\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.122724 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fsmq7\" (UniqueName: \"kubernetes.io/projected/4ba50f4e-f753-47d4-9de7-1f20b82ca936-kube-api-access-fsmq7\") pod \"certified-operators-7zmhq\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.123127 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-utilities\") pod \"certified-operators-7zmhq\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.123157 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-catalog-content\") pod \"certified-operators-7zmhq\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.123626 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-utilities\") pod \"certified-operators-7zmhq\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.123701 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-catalog-content\") pod \"certified-operators-7zmhq\" (UID: 
\"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.146430 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsmq7\" (UniqueName: \"kubernetes.io/projected/4ba50f4e-f753-47d4-9de7-1f20b82ca936-kube-api-access-fsmq7\") pod \"certified-operators-7zmhq\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.166799 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.439952 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zmhq"] Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.730919 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-wv566"] Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.773517 5107 generic.go:358] "Generic (PLEG): container finished" podID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerID="34e7da1ad2a7aac8cfcd5d842ddd2783be14b574d9309a4456b87d474e071472" exitCode=0 Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.828936 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zmhq" event={"ID":"4ba50f4e-f753-47d4-9de7-1f20b82ca936","Type":"ContainerStarted","Data":"a382b8067ce3ef1144b112f918d1be8a71f480226bb60e06afa997ef65d8e02e"} Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.829038 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-wv566"] Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.829708 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-wv566" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.835234 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.835904 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.836506 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-t92mb\"" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.840152 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k884" event={"ID":"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284","Type":"ContainerDied","Data":"34e7da1ad2a7aac8cfcd5d842ddd2783be14b574d9309a4456b87d474e071472"} Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.840203 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k884" event={"ID":"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284","Type":"ContainerStarted","Data":"9604bc30afa35a39ab3ed657d8dc1039d10c099749639bb972418353f133cbf2"} Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.858864 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq"] Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.864117 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.867739 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-zdxnc\"" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.868014 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.875608 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf"] Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.890771 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.900481 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq"] Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.910158 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf"] Dec 09 15:08:22 crc kubenswrapper[5107]: I1209 15:08:22.938322 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvtcv\" (UniqueName: \"kubernetes.io/projected/0d0b662b-c8bd-455c-95a3-a0a9cd901cac-kube-api-access-mvtcv\") pod \"obo-prometheus-operator-86648f486b-wv566\" (UID: \"0d0b662b-c8bd-455c-95a3-a0a9cd901cac\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-wv566" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.043854 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mvtcv\" (UniqueName: \"kubernetes.io/projected/0d0b662b-c8bd-455c-95a3-a0a9cd901cac-kube-api-access-mvtcv\") pod \"obo-prometheus-operator-86648f486b-wv566\" (UID: \"0d0b662b-c8bd-455c-95a3-a0a9cd901cac\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-wv566" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.043951 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq\" (UID: \"98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.044081 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b9982e3-08cf-44bb-b67c-b337e2d8c0b8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf\" (UID: \"7b9982e3-08cf-44bb-b67c-b337e2d8c0b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.044133 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq\" (UID: \"98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.044174 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b9982e3-08cf-44bb-b67c-b337e2d8c0b8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf\" (UID: \"7b9982e3-08cf-44bb-b67c-b337e2d8c0b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.086691 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvtcv\" (UniqueName: 
\"kubernetes.io/projected/0d0b662b-c8bd-455c-95a3-a0a9cd901cac-kube-api-access-mvtcv\") pod \"obo-prometheus-operator-86648f486b-wv566\" (UID: \"0d0b662b-c8bd-455c-95a3-a0a9cd901cac\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-wv566" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.124040 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-jzz2c"] Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.146133 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b9982e3-08cf-44bb-b67c-b337e2d8c0b8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf\" (UID: \"7b9982e3-08cf-44bb-b67c-b337e2d8c0b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.146203 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq\" (UID: \"98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.146230 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b9982e3-08cf-44bb-b67c-b337e2d8c0b8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf\" (UID: \"7b9982e3-08cf-44bb-b67c-b337e2d8c0b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.146287 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq\" (UID: \"98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.152977 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq\" (UID: \"98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.158067 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b9982e3-08cf-44bb-b67c-b337e2d8c0b8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf\" (UID: \"7b9982e3-08cf-44bb-b67c-b337e2d8c0b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.158454 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b9982e3-08cf-44bb-b67c-b337e2d8c0b8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf\" (UID: \"7b9982e3-08cf-44bb-b67c-b337e2d8c0b8\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.160136 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq\" (UID: \"98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.162740 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-wv566" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.162876 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-jzz2c" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.169083 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.170120 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-2ksjb\"" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.178004 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-jzz2c"] Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.200952 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.234107 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.247082 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vgvf\" (UniqueName: \"kubernetes.io/projected/90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5-kube-api-access-6vgvf\") pod \"observability-operator-78c97476f4-jzz2c\" (UID: \"90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5\") " pod="openshift-operators/observability-operator-78c97476f4-jzz2c" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.247181 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5-observability-operator-tls\") pod \"observability-operator-78c97476f4-jzz2c\" (UID: \"90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5\") " pod="openshift-operators/observability-operator-78c97476f4-jzz2c" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.308247 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-jr8l5"] Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.348830 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5-observability-operator-tls\") pod \"observability-operator-78c97476f4-jzz2c\" (UID: \"90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5\") " pod="openshift-operators/observability-operator-78c97476f4-jzz2c" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.348917 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vgvf\" (UniqueName: \"kubernetes.io/projected/90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5-kube-api-access-6vgvf\") pod \"observability-operator-78c97476f4-jzz2c\" (UID: \"90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5\") " pod="openshift-operators/observability-operator-78c97476f4-jzz2c" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.354539 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5-observability-operator-tls\") pod \"observability-operator-78c97476f4-jzz2c\" (UID: \"90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5\") " pod="openshift-operators/observability-operator-78c97476f4-jzz2c" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.379695 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vgvf\" (UniqueName: \"kubernetes.io/projected/90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5-kube-api-access-6vgvf\") pod \"observability-operator-78c97476f4-jzz2c\" (UID: \"90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5\") " pod="openshift-operators/observability-operator-78c97476f4-jzz2c" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.383891 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-jr8l5"] Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.384026 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.387309 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-9rbrn\"" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.554940 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2z2g\" (UniqueName: \"kubernetes.io/projected/b370bae5-4dcb-4a26-8d8b-06b73aeb2c05-kube-api-access-f2z2g\") pod \"perses-operator-68bdb49cbf-jr8l5\" (UID: \"b370bae5-4dcb-4a26-8d8b-06b73aeb2c05\") " pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.555064 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b370bae5-4dcb-4a26-8d8b-06b73aeb2c05-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-jr8l5\" (UID: \"b370bae5-4dcb-4a26-8d8b-06b73aeb2c05\") " pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.567894 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-jzz2c" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.656310 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f2z2g\" (UniqueName: \"kubernetes.io/projected/b370bae5-4dcb-4a26-8d8b-06b73aeb2c05-kube-api-access-f2z2g\") pod \"perses-operator-68bdb49cbf-jr8l5\" (UID: \"b370bae5-4dcb-4a26-8d8b-06b73aeb2c05\") " pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.656474 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b370bae5-4dcb-4a26-8d8b-06b73aeb2c05-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-jr8l5\" (UID: \"b370bae5-4dcb-4a26-8d8b-06b73aeb2c05\") " pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.657598 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b370bae5-4dcb-4a26-8d8b-06b73aeb2c05-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-jr8l5\" (UID: \"b370bae5-4dcb-4a26-8d8b-06b73aeb2c05\") " pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.686085 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2z2g\" (UniqueName: \"kubernetes.io/projected/b370bae5-4dcb-4a26-8d8b-06b73aeb2c05-kube-api-access-f2z2g\") pod \"perses-operator-68bdb49cbf-jr8l5\" (UID: \"b370bae5-4dcb-4a26-8d8b-06b73aeb2c05\") " pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.747387 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.784832 5107 generic.go:358] "Generic (PLEG): container finished" podID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerID="768695741cd883f10a2f8a4081347c22de6f81ccc6f347c138293e1ae1826cad" exitCode=0 Dec 09 15:08:23 crc kubenswrapper[5107]: I1209 15:08:23.784908 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zmhq" event={"ID":"4ba50f4e-f753-47d4-9de7-1f20b82ca936","Type":"ContainerDied","Data":"768695741cd883f10a2f8a4081347c22de6f81ccc6f347c138293e1ae1826cad"} Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.626841 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-9mv45" Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.726157 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-9kn5t"] Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.750612 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.808903 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" event={"ID":"3fa6e0f0-2067-430e-ac21-2cd0c0118655","Type":"ContainerDied","Data":"8d5b05eb2a8f717880fb7a6f2b1942f3bc3844245f570e2c3234f4bc288feb55"} Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.808941 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d5b05eb2a8f717880fb7a6f2b1942f3bc3844245f570e2c3234f4bc288feb55" Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.809022 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffw9bb" Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.894056 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-util\") pod \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.894213 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdbn6\" (UniqueName: \"kubernetes.io/projected/3fa6e0f0-2067-430e-ac21-2cd0c0118655-kube-api-access-mdbn6\") pod \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.894278 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-bundle\") pod \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\" (UID: \"3fa6e0f0-2067-430e-ac21-2cd0c0118655\") " Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.895598 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-bundle" (OuterVolumeSpecName: "bundle") pod "3fa6e0f0-2067-430e-ac21-2cd0c0118655" (UID: "3fa6e0f0-2067-430e-ac21-2cd0c0118655"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.915532 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-util" (OuterVolumeSpecName: "util") pod "3fa6e0f0-2067-430e-ac21-2cd0c0118655" (UID: "3fa6e0f0-2067-430e-ac21-2cd0c0118655"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.920779 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa6e0f0-2067-430e-ac21-2cd0c0118655-kube-api-access-mdbn6" (OuterVolumeSpecName: "kube-api-access-mdbn6") pod "3fa6e0f0-2067-430e-ac21-2cd0c0118655" (UID: "3fa6e0f0-2067-430e-ac21-2cd0c0118655"). InnerVolumeSpecName "kube-api-access-mdbn6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.996031 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mdbn6\" (UniqueName: \"kubernetes.io/projected/3fa6e0f0-2067-430e-ac21-2cd0c0118655-kube-api-access-mdbn6\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.996077 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:25 crc kubenswrapper[5107]: I1209 15:08:25.996089 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fa6e0f0-2067-430e-ac21-2cd0c0118655-util\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.017327 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.097444 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-bundle\") pod \"78a20010-5dd1-4843-811f-0ed58f38b127\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.097897 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcgmg\" (UniqueName: \"kubernetes.io/projected/78a20010-5dd1-4843-811f-0ed58f38b127-kube-api-access-bcgmg\") pod \"78a20010-5dd1-4843-811f-0ed58f38b127\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.098072 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-util\") pod \"78a20010-5dd1-4843-811f-0ed58f38b127\" (UID: \"78a20010-5dd1-4843-811f-0ed58f38b127\") " Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.109853 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a20010-5dd1-4843-811f-0ed58f38b127-kube-api-access-bcgmg" (OuterVolumeSpecName: "kube-api-access-bcgmg") pod "78a20010-5dd1-4843-811f-0ed58f38b127" (UID: "78a20010-5dd1-4843-811f-0ed58f38b127"). InnerVolumeSpecName "kube-api-access-bcgmg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.114756 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-util" (OuterVolumeSpecName: "util") pod "78a20010-5dd1-4843-811f-0ed58f38b127" (UID: "78a20010-5dd1-4843-811f-0ed58f38b127"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.115559 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-bundle" (OuterVolumeSpecName: "bundle") pod "78a20010-5dd1-4843-811f-0ed58f38b127" (UID: "78a20010-5dd1-4843-811f-0ed58f38b127"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.200050 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.200093 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bcgmg\" (UniqueName: \"kubernetes.io/projected/78a20010-5dd1-4843-811f-0ed58f38b127-kube-api-access-bcgmg\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.200106 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a20010-5dd1-4843-811f-0ed58f38b127-util\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.618402 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-jzz2c"] Dec 09 15:08:26 crc kubenswrapper[5107]: W1209 15:08:26.645556 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90904d5c_1c0c_4f4e_a60a_99ed2f6b78d5.slice/crio-1603982721b0bc0fc84bd05daccbfe800369be6ee1ca2d6d4db48975fabb1b6a WatchSource:0}: Error finding container 1603982721b0bc0fc84bd05daccbfe800369be6ee1ca2d6d4db48975fabb1b6a: Status 404 returned error can't find the container with id 1603982721b0bc0fc84bd05daccbfe800369be6ee1ca2d6d4db48975fabb1b6a Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.879091 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k884" event={"ID":"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284","Type":"ContainerStarted","Data":"17e06b9369664dfe9801ff2019de18aa64e5c6f55752c8d64bc35cf7ed29d85c"} Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.884277 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" event={"ID":"78a20010-5dd1-4843-811f-0ed58f38b127","Type":"ContainerDied","Data":"fb54d6a36ea84d14d2d339a5d5b5b59c67f74270ddafccb990cd4681aae6d60d"} Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.884354 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb54d6a36ea84d14d2d339a5d5b5b59c67f74270ddafccb990cd4681aae6d60d" Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.884518 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eqst6s" Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.906606 5107 generic.go:358] "Generic (PLEG): container finished" podID="a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" containerID="75d81ed03326bc1321381d30b2fe081acfb8dbadcc20d9a1ece48a03662f7bc8" exitCode=0 Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.906780 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" event={"ID":"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95","Type":"ContainerDied","Data":"75d81ed03326bc1321381d30b2fe081acfb8dbadcc20d9a1ece48a03662f7bc8"} Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.916662 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-jzz2c" event={"ID":"90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5","Type":"ContainerStarted","Data":"1603982721b0bc0fc84bd05daccbfe800369be6ee1ca2d6d4db48975fabb1b6a"} Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.917389 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-wv566"] Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.923722 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zmhq" event={"ID":"4ba50f4e-f753-47d4-9de7-1f20b82ca936","Type":"ContainerStarted","Data":"df814bdd598f29698bd9b7c43d1a976691e4ce9d1e18266ccca2c4f262ec25cf"} Dec 09 15:08:26 crc kubenswrapper[5107]: I1209 15:08:26.932436 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf"] Dec 09 15:08:27 crc kubenswrapper[5107]: I1209 15:08:27.004604 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-jr8l5"] Dec 09 15:08:27 crc kubenswrapper[5107]: I1209 15:08:27.076086 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq"] Dec 09 15:08:27 crc kubenswrapper[5107]: W1209 15:08:27.149092 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d0b662b_c8bd_455c_95a3_a0a9cd901cac.slice/crio-b7b752b0a8bedb20205d1447f12997c77952246177f84de8d317627838231159 WatchSource:0}: Error finding container b7b752b0a8bedb20205d1447f12997c77952246177f84de8d317627838231159: Status 404 returned error can't find the container with id b7b752b0a8bedb20205d1447f12997c77952246177f84de8d317627838231159 Dec 09 15:08:27 crc kubenswrapper[5107]: W1209 15:08:27.151378 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98aa4bca_b6c5_4f54_b659_bd2c1ba2cbc3.slice/crio-815dcad935625554d730ec5a207e1497564deba9af589ee140feb3eccb82aa73 WatchSource:0}: Error finding container 815dcad935625554d730ec5a207e1497564deba9af589ee140feb3eccb82aa73: Status 404 returned error can't find the container with id 815dcad935625554d730ec5a207e1497564deba9af589ee140feb3eccb82aa73 Dec 09 15:08:27 crc kubenswrapper[5107]: W1209 15:08:27.153047 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb370bae5_4dcb_4a26_8d8b_06b73aeb2c05.slice/crio-e0f9aedd13ebf4d75e4245aa6efdc84658da453e5fa77e4a6aa5a7ed1054e2ff 
WatchSource:0}: Error finding container e0f9aedd13ebf4d75e4245aa6efdc84658da453e5fa77e4a6aa5a7ed1054e2ff: Status 404 returned error can't find the container with id e0f9aedd13ebf4d75e4245aa6efdc84658da453e5fa77e4a6aa5a7ed1054e2ff Dec 09 15:08:27 crc kubenswrapper[5107]: W1209 15:08:27.164440 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b9982e3_08cf_44bb_b67c_b337e2d8c0b8.slice/crio-fbb0e2b338070eccb06bc449d71729c48cec3a2ef8060ea7df0856338cc0de7e WatchSource:0}: Error finding container fbb0e2b338070eccb06bc449d71729c48cec3a2ef8060ea7df0856338cc0de7e: Status 404 returned error can't find the container with id fbb0e2b338070eccb06bc449d71729c48cec3a2ef8060ea7df0856338cc0de7e Dec 09 15:08:27 crc kubenswrapper[5107]: I1209 15:08:27.940530 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" event={"ID":"7b9982e3-08cf-44bb-b67c-b337e2d8c0b8","Type":"ContainerStarted","Data":"fbb0e2b338070eccb06bc449d71729c48cec3a2ef8060ea7df0856338cc0de7e"} Dec 09 15:08:27 crc kubenswrapper[5107]: I1209 15:08:27.943163 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" event={"ID":"b370bae5-4dcb-4a26-8d8b-06b73aeb2c05","Type":"ContainerStarted","Data":"e0f9aedd13ebf4d75e4245aa6efdc84658da453e5fa77e4a6aa5a7ed1054e2ff"} Dec 09 15:08:27 crc kubenswrapper[5107]: I1209 15:08:27.945020 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" event={"ID":"98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3","Type":"ContainerStarted","Data":"815dcad935625554d730ec5a207e1497564deba9af589ee140feb3eccb82aa73"} Dec 09 15:08:27 crc kubenswrapper[5107]: I1209 15:08:27.946767 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-wv566" event={"ID":"0d0b662b-c8bd-455c-95a3-a0a9cd901cac","Type":"ContainerStarted","Data":"b7b752b0a8bedb20205d1447f12997c77952246177f84de8d317627838231159"} Dec 09 15:08:27 crc kubenswrapper[5107]: I1209 15:08:27.948828 5107 generic.go:358] "Generic (PLEG): container finished" podID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerID="df814bdd598f29698bd9b7c43d1a976691e4ce9d1e18266ccca2c4f262ec25cf" exitCode=0 Dec 09 15:08:27 crc kubenswrapper[5107]: I1209 15:08:27.948933 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zmhq" event={"ID":"4ba50f4e-f753-47d4-9de7-1f20b82ca936","Type":"ContainerDied","Data":"df814bdd598f29698bd9b7c43d1a976691e4ce9d1e18266ccca2c4f262ec25cf"} Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.000052 5107 generic.go:358] "Generic (PLEG): container finished" podID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerID="17e06b9369664dfe9801ff2019de18aa64e5c6f55752c8d64bc35cf7ed29d85c" exitCode=0 Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.000290 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k884" event={"ID":"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284","Type":"ContainerDied","Data":"17e06b9369664dfe9801ff2019de18aa64e5c6f55752c8d64bc35cf7ed29d85c"} Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.018863 5107 generic.go:358] "Generic (PLEG): container finished" podID="a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" containerID="ad63d8fc53b2116a30eee0a0d1b1f98dfc85c139ca038821800e32dae3b9419f" exitCode=0 
Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.019046 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" event={"ID":"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95","Type":"ContainerDied","Data":"ad63d8fc53b2116a30eee0a0d1b1f98dfc85c139ca038821800e32dae3b9419f"} Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.022795 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zmhq" event={"ID":"4ba50f4e-f753-47d4-9de7-1f20b82ca936","Type":"ContainerStarted","Data":"65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237"} Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.089854 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7zmhq" podStartSLOduration=5.668317697 podStartE2EDuration="8.089825003s" podCreationTimestamp="2025-12-09 15:08:21 +0000 UTC" firstStartedPulling="2025-12-09 15:08:23.786320485 +0000 UTC m=+751.510025374" lastFinishedPulling="2025-12-09 15:08:26.207827791 +0000 UTC m=+753.931532680" observedRunningTime="2025-12-09 15:08:29.087798207 +0000 UTC m=+756.811503096" watchObservedRunningTime="2025-12-09 15:08:29.089825003 +0000 UTC m=+756.813529892" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.218546 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-q4crj"] Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.219779 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78a20010-5dd1-4843-811f-0ed58f38b127" containerName="pull" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.219909 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a20010-5dd1-4843-811f-0ed58f38b127" containerName="pull" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.219995 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78a20010-5dd1-4843-811f-0ed58f38b127" containerName="util" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.220066 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a20010-5dd1-4843-811f-0ed58f38b127" containerName="util" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.220171 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78a20010-5dd1-4843-811f-0ed58f38b127" containerName="extract" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.220248 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a20010-5dd1-4843-811f-0ed58f38b127" containerName="extract" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.220363 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3fa6e0f0-2067-430e-ac21-2cd0c0118655" containerName="util" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.220473 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa6e0f0-2067-430e-ac21-2cd0c0118655" containerName="util" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.220560 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3fa6e0f0-2067-430e-ac21-2cd0c0118655" containerName="pull" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.220640 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa6e0f0-2067-430e-ac21-2cd0c0118655" containerName="pull" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.220716 5107 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="3fa6e0f0-2067-430e-ac21-2cd0c0118655" containerName="extract" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.220813 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa6e0f0-2067-430e-ac21-2cd0c0118655" containerName="extract" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.221052 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="78a20010-5dd1-4843-811f-0ed58f38b127" containerName="extract" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.221139 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3fa6e0f0-2067-430e-ac21-2cd0c0118655" containerName="extract" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.227824 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-q4crj" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.230738 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-q4crj"] Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.232377 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.233750 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.243185 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-kwjsd\"" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.307352 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5b7\" (UniqueName: \"kubernetes.io/projected/b2a47acc-54c4-48af-bec9-1cc11934cbcc-kube-api-access-pn5b7\") pod \"interconnect-operator-78b9bd8798-q4crj\" (UID: \"b2a47acc-54c4-48af-bec9-1cc11934cbcc\") " pod="service-telemetry/interconnect-operator-78b9bd8798-q4crj" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.435473 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pn5b7\" (UniqueName: \"kubernetes.io/projected/b2a47acc-54c4-48af-bec9-1cc11934cbcc-kube-api-access-pn5b7\") pod \"interconnect-operator-78b9bd8798-q4crj\" (UID: \"b2a47acc-54c4-48af-bec9-1cc11934cbcc\") " pod="service-telemetry/interconnect-operator-78b9bd8798-q4crj" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.510690 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn5b7\" (UniqueName: \"kubernetes.io/projected/b2a47acc-54c4-48af-bec9-1cc11934cbcc-kube-api-access-pn5b7\") pod \"interconnect-operator-78b9bd8798-q4crj\" (UID: \"b2a47acc-54c4-48af-bec9-1cc11934cbcc\") " pod="service-telemetry/interconnect-operator-78b9bd8798-q4crj" Dec 09 15:08:29 crc kubenswrapper[5107]: I1209 15:08:29.572352 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-q4crj" Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.056546 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k884" event={"ID":"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284","Type":"ContainerStarted","Data":"fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772"} Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.092087 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5k884" podStartSLOduration=5.720273302 podStartE2EDuration="9.092071766s" podCreationTimestamp="2025-12-09 15:08:21 +0000 UTC" firstStartedPulling="2025-12-09 15:08:22.830612996 +0000 UTC m=+750.554317885" lastFinishedPulling="2025-12-09 15:08:26.20241147 +0000 UTC m=+753.926116349" observedRunningTime="2025-12-09 15:08:30.091815979 +0000 UTC m=+757.815520878" watchObservedRunningTime="2025-12-09 15:08:30.092071766 +0000 UTC m=+757.815776655" Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.272074 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-q4crj"] Dec 09 15:08:30 crc kubenswrapper[5107]: W1209 15:08:30.308353 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2a47acc_54c4_48af_bec9_1cc11934cbcc.slice/crio-bac73fe7f298fb9e9f6a1e7e733373b21c38470d161eccbee9808086119990ff WatchSource:0}: Error finding container bac73fe7f298fb9e9f6a1e7e733373b21c38470d161eccbee9808086119990ff: Status 404 returned error can't find the container with id bac73fe7f298fb9e9f6a1e7e733373b21c38470d161eccbee9808086119990ff Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.490551 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.621627 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-bundle\") pod \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.621682 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp6kx\" (UniqueName: \"kubernetes.io/projected/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-kube-api-access-rp6kx\") pod \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.621708 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-util\") pod \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\" (UID: \"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95\") " Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.622667 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-bundle" (OuterVolumeSpecName: "bundle") pod "a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" (UID: "a5b18b3a-e821-4a1e-bd54-67a40ae9ba95"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.627239 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-kube-api-access-rp6kx" (OuterVolumeSpecName: "kube-api-access-rp6kx") pod "a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" (UID: "a5b18b3a-e821-4a1e-bd54-67a40ae9ba95"). InnerVolumeSpecName "kube-api-access-rp6kx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.652804 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-util" (OuterVolumeSpecName: "util") pod "a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" (UID: "a5b18b3a-e821-4a1e-bd54-67a40ae9ba95"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.723162 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.723214 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rp6kx\" (UniqueName: \"kubernetes.io/projected/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-kube-api-access-rp6kx\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:30 crc kubenswrapper[5107]: I1209 15:08:30.723226 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5b18b3a-e821-4a1e-bd54-67a40ae9ba95-util\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:31 crc kubenswrapper[5107]: I1209 15:08:31.093259 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" Dec 09 15:08:31 crc kubenswrapper[5107]: I1209 15:08:31.093986 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ax4ghm" event={"ID":"a5b18b3a-e821-4a1e-bd54-67a40ae9ba95","Type":"ContainerDied","Data":"03bb8106a35ea8f529f8fc1be4ac438ffa5518c34a517a51289a5424b735dc41"} Dec 09 15:08:31 crc kubenswrapper[5107]: I1209 15:08:31.094016 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03bb8106a35ea8f529f8fc1be4ac438ffa5518c34a517a51289a5424b735dc41" Dec 09 15:08:31 crc kubenswrapper[5107]: I1209 15:08:31.109446 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-q4crj" event={"ID":"b2a47acc-54c4-48af-bec9-1cc11934cbcc","Type":"ContainerStarted","Data":"bac73fe7f298fb9e9f6a1e7e733373b21c38470d161eccbee9808086119990ff"} Dec 09 15:08:31 crc kubenswrapper[5107]: I1209 15:08:31.373227 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:31 crc kubenswrapper[5107]: I1209 15:08:31.373272 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.175067 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.177415 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.281041 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.362727 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6dcc8b976-7cnps"] Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.370705 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" containerName="util" Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.370736 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" containerName="util" Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.370752 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" containerName="extract" Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.370758 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" containerName="extract" Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.370788 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" containerName="pull" Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.370796 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" containerName="pull" Dec 09 15:08:32 crc kubenswrapper[5107]: I1209 15:08:32.371043 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="a5b18b3a-e821-4a1e-bd54-67a40ae9ba95" containerName="extract" Dec 09 15:08:32 crc 
kubenswrapper[5107]: I1209 15:08:32.442523 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5k884" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerName="registry-server" probeResult="failure" output=< Dec 09 15:08:32 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Dec 09 15:08:32 crc kubenswrapper[5107]: > Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.426402 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6dcc8b976-7cnps"] Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.426987 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.429738 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.430994 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-5rfdb\"" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.473348 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd487e1d-f6e7-43b4-ab5c-15e80a052438-webhook-cert\") pod \"elastic-operator-6dcc8b976-7cnps\" (UID: \"bd487e1d-f6e7-43b4-ab5c-15e80a052438\") " pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.473451 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd487e1d-f6e7-43b4-ab5c-15e80a052438-apiservice-cert\") pod \"elastic-operator-6dcc8b976-7cnps\" (UID: \"bd487e1d-f6e7-43b4-ab5c-15e80a052438\") " pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.473487 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-556hj\" (UniqueName: \"kubernetes.io/projected/bd487e1d-f6e7-43b4-ab5c-15e80a052438-kube-api-access-556hj\") pod \"elastic-operator-6dcc8b976-7cnps\" (UID: \"bd487e1d-f6e7-43b4-ab5c-15e80a052438\") " pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.497415 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.575239 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd487e1d-f6e7-43b4-ab5c-15e80a052438-webhook-cert\") pod \"elastic-operator-6dcc8b976-7cnps\" (UID: \"bd487e1d-f6e7-43b4-ab5c-15e80a052438\") " pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.575318 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd487e1d-f6e7-43b4-ab5c-15e80a052438-apiservice-cert\") pod \"elastic-operator-6dcc8b976-7cnps\" (UID: \"bd487e1d-f6e7-43b4-ab5c-15e80a052438\") " pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.575375 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-556hj\" (UniqueName: \"kubernetes.io/projected/bd487e1d-f6e7-43b4-ab5c-15e80a052438-kube-api-access-556hj\") pod \"elastic-operator-6dcc8b976-7cnps\" (UID: \"bd487e1d-f6e7-43b4-ab5c-15e80a052438\") " pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.597980 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd487e1d-f6e7-43b4-ab5c-15e80a052438-apiservice-cert\") pod \"elastic-operator-6dcc8b976-7cnps\" (UID: \"bd487e1d-f6e7-43b4-ab5c-15e80a052438\") " pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.600948 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd487e1d-f6e7-43b4-ab5c-15e80a052438-webhook-cert\") pod \"elastic-operator-6dcc8b976-7cnps\" (UID: \"bd487e1d-f6e7-43b4-ab5c-15e80a052438\") " pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.608596 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-556hj\" (UniqueName: \"kubernetes.io/projected/bd487e1d-f6e7-43b4-ab5c-15e80a052438-kube-api-access-556hj\") pod \"elastic-operator-6dcc8b976-7cnps\" (UID: \"bd487e1d-f6e7-43b4-ab5c-15e80a052438\") " pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:33 crc kubenswrapper[5107]: I1209 15:08:33.763567 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" Dec 09 15:08:34 crc kubenswrapper[5107]: I1209 15:08:34.574251 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zmhq"] Dec 09 15:08:34 crc kubenswrapper[5107]: I1209 15:08:34.692009 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6dcc8b976-7cnps"] Dec 09 15:08:35 crc kubenswrapper[5107]: I1209 15:08:35.174479 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" event={"ID":"bd487e1d-f6e7-43b4-ab5c-15e80a052438","Type":"ContainerStarted","Data":"5e95999c0a2fc123ac283e3f4cfb689a0d2c1c7f233636626f830f3330a612eb"} Dec 09 15:08:36 crc kubenswrapper[5107]: I1209 15:08:36.186276 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7zmhq" podUID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerName="registry-server" containerID="cri-o://65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237" gracePeriod=2 Dec 09 15:08:37 crc kubenswrapper[5107]: I1209 15:08:37.212789 5107 generic.go:358] "Generic (PLEG): container finished" podID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerID="65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237" exitCode=0 Dec 09 15:08:37 crc kubenswrapper[5107]: I1209 15:08:37.212856 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zmhq" event={"ID":"4ba50f4e-f753-47d4-9de7-1f20b82ca936","Type":"ContainerDied","Data":"65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237"} Dec 09 15:08:41 crc kubenswrapper[5107]: I1209 15:08:41.410900 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:41 crc 
kubenswrapper[5107]: I1209 15:08:41.458460 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:43 crc kubenswrapper[5107]: E1209 15:08:43.432222 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237 is running failed: container process not found" containerID="65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 15:08:43 crc kubenswrapper[5107]: E1209 15:08:43.433106 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237 is running failed: container process not found" containerID="65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 15:08:43 crc kubenswrapper[5107]: E1209 15:08:43.433437 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237 is running failed: container process not found" containerID="65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 15:08:43 crc kubenswrapper[5107]: E1209 15:08:43.433504 5107 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-7zmhq" podUID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerName="registry-server" probeResult="unknown" Dec 09 15:08:44 crc kubenswrapper[5107]: I1209 15:08:44.154359 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:08:44 crc kubenswrapper[5107]: I1209 15:08:44.155052 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:08:44 crc kubenswrapper[5107]: I1209 15:08:44.224119 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5k884"] Dec 09 15:08:44 crc kubenswrapper[5107]: I1209 15:08:44.226230 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5k884" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerName="registry-server" containerID="cri-o://fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772" gracePeriod=2 Dec 09 15:08:45 crc kubenswrapper[5107]: I1209 15:08:45.269323 5107 generic.go:358] "Generic (PLEG): container finished" podID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerID="fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772" exitCode=0 Dec 09 
15:08:45 crc kubenswrapper[5107]: I1209 15:08:45.269625 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k884" event={"ID":"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284","Type":"ContainerDied","Data":"fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772"} Dec 09 15:08:46 crc kubenswrapper[5107]: I1209 15:08:46.723211 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv"] Dec 09 15:08:46 crc kubenswrapper[5107]: I1209 15:08:46.727613 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" Dec 09 15:08:46 crc kubenswrapper[5107]: I1209 15:08:46.729887 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-m8jpm\"" Dec 09 15:08:46 crc kubenswrapper[5107]: I1209 15:08:46.730216 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 15:08:46 crc kubenswrapper[5107]: I1209 15:08:46.730472 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 09 15:08:46 crc kubenswrapper[5107]: I1209 15:08:46.737415 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv"] Dec 09 15:08:46 crc kubenswrapper[5107]: I1209 15:08:46.958662 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29f0b9c6-c235-4331-887a-d8014edf9170-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-q2cpv\" (UID: \"29f0b9c6-c235-4331-887a-d8014edf9170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" Dec 09 15:08:46 crc kubenswrapper[5107]: I1209 15:08:46.959068 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2gv4\" (UniqueName: \"kubernetes.io/projected/29f0b9c6-c235-4331-887a-d8014edf9170-kube-api-access-r2gv4\") pod \"cert-manager-operator-controller-manager-64c74584c4-q2cpv\" (UID: \"29f0b9c6-c235-4331-887a-d8014edf9170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" Dec 09 15:08:47 crc kubenswrapper[5107]: I1209 15:08:47.060459 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29f0b9c6-c235-4331-887a-d8014edf9170-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-q2cpv\" (UID: \"29f0b9c6-c235-4331-887a-d8014edf9170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" Dec 09 15:08:47 crc kubenswrapper[5107]: I1209 15:08:47.060580 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r2gv4\" (UniqueName: \"kubernetes.io/projected/29f0b9c6-c235-4331-887a-d8014edf9170-kube-api-access-r2gv4\") pod \"cert-manager-operator-controller-manager-64c74584c4-q2cpv\" (UID: \"29f0b9c6-c235-4331-887a-d8014edf9170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" Dec 09 15:08:47 crc kubenswrapper[5107]: I1209 15:08:47.060907 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29f0b9c6-c235-4331-887a-d8014edf9170-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-q2cpv\" (UID: \"29f0b9c6-c235-4331-887a-d8014edf9170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" Dec 09 15:08:47 crc kubenswrapper[5107]: I1209 15:08:47.094565 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2gv4\" (UniqueName: \"kubernetes.io/projected/29f0b9c6-c235-4331-887a-d8014edf9170-kube-api-access-r2gv4\") pod \"cert-manager-operator-controller-manager-64c74584c4-q2cpv\" (UID: \"29f0b9c6-c235-4331-887a-d8014edf9170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" Dec 09 15:08:47 crc kubenswrapper[5107]: I1209 15:08:47.350025 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.036627 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.114595 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-catalog-content\") pod \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.114676 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsmq7\" (UniqueName: \"kubernetes.io/projected/4ba50f4e-f753-47d4-9de7-1f20b82ca936-kube-api-access-fsmq7\") pod \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.114741 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-utilities\") pod \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\" (UID: \"4ba50f4e-f753-47d4-9de7-1f20b82ca936\") " Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.116416 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-utilities" (OuterVolumeSpecName: "utilities") pod "4ba50f4e-f753-47d4-9de7-1f20b82ca936" (UID: "4ba50f4e-f753-47d4-9de7-1f20b82ca936"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.151530 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ba50f4e-f753-47d4-9de7-1f20b82ca936" (UID: "4ba50f4e-f753-47d4-9de7-1f20b82ca936"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.154397 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba50f4e-f753-47d4-9de7-1f20b82ca936-kube-api-access-fsmq7" (OuterVolumeSpecName: "kube-api-access-fsmq7") pod "4ba50f4e-f753-47d4-9de7-1f20b82ca936" (UID: "4ba50f4e-f753-47d4-9de7-1f20b82ca936"). InnerVolumeSpecName "kube-api-access-fsmq7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.217099 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.217254 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fsmq7\" (UniqueName: \"kubernetes.io/projected/4ba50f4e-f753-47d4-9de7-1f20b82ca936-kube-api-access-fsmq7\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.217265 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba50f4e-f753-47d4-9de7-1f20b82ca936-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.324068 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zmhq" event={"ID":"4ba50f4e-f753-47d4-9de7-1f20b82ca936","Type":"ContainerDied","Data":"a382b8067ce3ef1144b112f918d1be8a71f480226bb60e06afa997ef65d8e02e"} Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.324131 5107 scope.go:117] "RemoveContainer" containerID="65c205154f71ed1314d122de2f9f89251a946737ab583ce73ecbef1be714b237" Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.324509 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zmhq" Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.375129 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zmhq"] Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.375199 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7zmhq"] Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.780078 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" podUID="baa70a71-f986-4810-8d66-a6313df5d522" containerName="registry" containerID="cri-o://b948b6def0ccd4afa1cda2751aa44b8af181311d2c053c06503f16f9856b0d4f" gracePeriod=30 Dec 09 15:08:50 crc kubenswrapper[5107]: I1209 15:08:50.844119 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" path="/var/lib/kubelet/pods/4ba50f4e-f753-47d4-9de7-1f20b82ca936/volumes" Dec 09 15:08:51 crc kubenswrapper[5107]: I1209 15:08:51.341497 5107 generic.go:358] "Generic (PLEG): container finished" podID="baa70a71-f986-4810-8d66-a6313df5d522" containerID="b948b6def0ccd4afa1cda2751aa44b8af181311d2c053c06503f16f9856b0d4f" exitCode=0 Dec 09 15:08:51 crc kubenswrapper[5107]: I1209 15:08:51.341606 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" event={"ID":"baa70a71-f986-4810-8d66-a6313df5d522","Type":"ContainerDied","Data":"b948b6def0ccd4afa1cda2751aa44b8af181311d2c053c06503f16f9856b0d4f"} Dec 09 15:08:51 crc kubenswrapper[5107]: E1209 15:08:51.418564 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772 is running failed: container process not found" containerID="fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772" 
cmd=["grpc_health_probe","-addr=:50051"] Dec 09 15:08:51 crc kubenswrapper[5107]: E1209 15:08:51.419375 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772 is running failed: container process not found" containerID="fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 15:08:51 crc kubenswrapper[5107]: E1209 15:08:51.419752 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772 is running failed: container process not found" containerID="fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 15:08:51 crc kubenswrapper[5107]: E1209 15:08:51.419787 5107 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5k884" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerName="registry-server" probeResult="unknown" Dec 09 15:08:53 crc kubenswrapper[5107]: I1209 15:08:53.581503 5107 patch_prober.go:28] interesting pod/image-registry-66587d64c8-9kn5t container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" start-of-body= Dec 09 15:08:53 crc kubenswrapper[5107]: I1209 15:08:53.581971 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" podUID="baa70a71-f986-4810-8d66-a6313df5d522" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" Dec 09 15:08:54 crc kubenswrapper[5107]: I1209 15:08:54.584115 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:54 crc kubenswrapper[5107]: I1209 15:08:54.701900 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-utilities\") pod \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " Dec 09 15:08:54 crc kubenswrapper[5107]: I1209 15:08:54.702081 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlcvn\" (UniqueName: \"kubernetes.io/projected/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-kube-api-access-qlcvn\") pod \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " Dec 09 15:08:54 crc kubenswrapper[5107]: I1209 15:08:54.702106 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-catalog-content\") pod \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\" (UID: \"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284\") " Dec 09 15:08:54 crc kubenswrapper[5107]: I1209 15:08:54.704688 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-utilities" (OuterVolumeSpecName: "utilities") pod "b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" (UID: "b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:54 crc kubenswrapper[5107]: I1209 15:08:54.726644 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-kube-api-access-qlcvn" (OuterVolumeSpecName: "kube-api-access-qlcvn") pod "b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" (UID: "b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284"). InnerVolumeSpecName "kube-api-access-qlcvn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:08:54 crc kubenswrapper[5107]: I1209 15:08:54.808205 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" (UID: "b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:54 crc kubenswrapper[5107]: I1209 15:08:54.808634 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:54 crc kubenswrapper[5107]: I1209 15:08:54.808657 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qlcvn\" (UniqueName: \"kubernetes.io/projected/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-kube-api-access-qlcvn\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:54 crc kubenswrapper[5107]: I1209 15:08:54.808672 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:55 crc kubenswrapper[5107]: I1209 15:08:55.366096 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5k884" event={"ID":"b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284","Type":"ContainerDied","Data":"9604bc30afa35a39ab3ed657d8dc1039d10c099749639bb972418353f133cbf2"} Dec 09 15:08:55 crc kubenswrapper[5107]: I1209 15:08:55.366187 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5k884" Dec 09 15:08:55 crc kubenswrapper[5107]: I1209 15:08:55.389085 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5k884"] Dec 09 15:08:55 crc kubenswrapper[5107]: I1209 15:08:55.393761 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5k884"] Dec 09 15:08:55 crc kubenswrapper[5107]: I1209 15:08:55.900096 5107 scope.go:117] "RemoveContainer" containerID="df814bdd598f29698bd9b7c43d1a976691e4ce9d1e18266ccca2c4f262ec25cf" Dec 09 15:08:55 crc kubenswrapper[5107]: I1209 15:08:55.939022 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 15:08:55 crc kubenswrapper[5107]: I1209 15:08:55.967574 5107 scope.go:117] "RemoveContainer" containerID="768695741cd883f10a2f8a4081347c22de6f81ccc6f347c138293e1ae1826cad" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.012738 5107 scope.go:117] "RemoveContainer" containerID="fcd86ef3cf750f9c75e8b44e1b9ea11457de2878e4d58a0467f881f4e250c772" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.029695 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/baa70a71-f986-4810-8d66-a6313df5d522-installation-pull-secrets\") pod \"baa70a71-f986-4810-8d66-a6313df5d522\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.030075 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"baa70a71-f986-4810-8d66-a6313df5d522\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.030116 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-registry-certificates\") pod \"baa70a71-f986-4810-8d66-a6313df5d522\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.030153 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-bound-sa-token\") pod \"baa70a71-f986-4810-8d66-a6313df5d522\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.030200 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/baa70a71-f986-4810-8d66-a6313df5d522-ca-trust-extracted\") pod \"baa70a71-f986-4810-8d66-a6313df5d522\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.030290 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmkd7\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-kube-api-access-fmkd7\") pod \"baa70a71-f986-4810-8d66-a6313df5d522\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.030376 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-registry-tls\") pod \"baa70a71-f986-4810-8d66-a6313df5d522\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.030435 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-trusted-ca\") pod \"baa70a71-f986-4810-8d66-a6313df5d522\" (UID: \"baa70a71-f986-4810-8d66-a6313df5d522\") " Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.031328 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-trusted-ca" 
(OuterVolumeSpecName: "trusted-ca") pod "baa70a71-f986-4810-8d66-a6313df5d522" (UID: "baa70a71-f986-4810-8d66-a6313df5d522"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.037653 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "baa70a71-f986-4810-8d66-a6313df5d522" (UID: "baa70a71-f986-4810-8d66-a6313df5d522"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.042236 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "baa70a71-f986-4810-8d66-a6313df5d522" (UID: "baa70a71-f986-4810-8d66-a6313df5d522"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.042072 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "baa70a71-f986-4810-8d66-a6313df5d522" (UID: "baa70a71-f986-4810-8d66-a6313df5d522"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.043483 5107 scope.go:117] "RemoveContainer" containerID="17e06b9369664dfe9801ff2019de18aa64e5c6f55752c8d64bc35cf7ed29d85c" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.044295 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-kube-api-access-fmkd7" (OuterVolumeSpecName: "kube-api-access-fmkd7") pod "baa70a71-f986-4810-8d66-a6313df5d522" (UID: "baa70a71-f986-4810-8d66-a6313df5d522"). InnerVolumeSpecName "kube-api-access-fmkd7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.046925 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa70a71-f986-4810-8d66-a6313df5d522-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "baa70a71-f986-4810-8d66-a6313df5d522" (UID: "baa70a71-f986-4810-8d66-a6313df5d522"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.054367 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baa70a71-f986-4810-8d66-a6313df5d522-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "baa70a71-f986-4810-8d66-a6313df5d522" (UID: "baa70a71-f986-4810-8d66-a6313df5d522"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.065476 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "baa70a71-f986-4810-8d66-a6313df5d522" (UID: "baa70a71-f986-4810-8d66-a6313df5d522"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.089999 5107 scope.go:117] "RemoveContainer" containerID="34e7da1ad2a7aac8cfcd5d842ddd2783be14b574d9309a4456b87d474e071472" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.131714 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.131736 5107 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/baa70a71-f986-4810-8d66-a6313df5d522-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.131744 5107 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/baa70a71-f986-4810-8d66-a6313df5d522-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.131751 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.131761 5107 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/baa70a71-f986-4810-8d66-a6313df5d522-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.131769 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fmkd7\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-kube-api-access-fmkd7\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.131777 5107 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/baa70a71-f986-4810-8d66-a6313df5d522-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.186112 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv"] Dec 09 15:08:56 crc kubenswrapper[5107]: W1209 15:08:56.192947 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f0b9c6_c235_4331_887a_d8014edf9170.slice/crio-e68be48fc47bd030e703cfba9a9cfab3e1a1408d9b379b699070e95b860b9316 WatchSource:0}: Error finding container e68be48fc47bd030e703cfba9a9cfab3e1a1408d9b379b699070e95b860b9316: Status 404 returned error can't find the container with id e68be48fc47bd030e703cfba9a9cfab3e1a1408d9b379b699070e95b860b9316 Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.373562 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" event={"ID":"bd487e1d-f6e7-43b4-ab5c-15e80a052438","Type":"ContainerStarted","Data":"1932a5c0bc32a328808ea18a34d31d50a753fb2c3c7339f93341106f93002998"} Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.378935 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.379010 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-9kn5t" event={"ID":"baa70a71-f986-4810-8d66-a6313df5d522","Type":"ContainerDied","Data":"ac485abf21d0e0d8cc62de8ec8b33d7f0dd8b7f38c6299ab7de7bba85ef4098c"} Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.379046 5107 scope.go:117] "RemoveContainer" containerID="b948b6def0ccd4afa1cda2751aa44b8af181311d2c053c06503f16f9856b0d4f" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.388152 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" event={"ID":"29f0b9c6-c235-4331-887a-d8014edf9170","Type":"ContainerStarted","Data":"e68be48fc47bd030e703cfba9a9cfab3e1a1408d9b379b699070e95b860b9316"} Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.389833 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" event={"ID":"7b9982e3-08cf-44bb-b67c-b337e2d8c0b8","Type":"ContainerStarted","Data":"d100e3308e1c6e8bcecbc17ac35ce635761ac2792e02392ca3a9c2ae5f8ac1ab"} Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.392835 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" event={"ID":"b370bae5-4dcb-4a26-8d8b-06b73aeb2c05","Type":"ContainerStarted","Data":"d79a1ba55885f3f29cfa37f177740f59c666da4a944c869cbdd63d7ccc3a6482"} Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.393251 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.395782 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-q4crj" event={"ID":"b2a47acc-54c4-48af-bec9-1cc11934cbcc","Type":"ContainerStarted","Data":"54711acb8b65b996a3ab7c2cfd4b73bfe76bf502ef6dfc39ef168d2ba5d673d1"} Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.397513 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" event={"ID":"98aa4bca-b6c5-4f54-b659-bd2c1ba2cbc3","Type":"ContainerStarted","Data":"161a5b946f4f4bc00bef4fb49cfe0569771e5783486206a72fee8c6130a31ab8"} Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.399842 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-jzz2c" event={"ID":"90904d5c-1c0c-4f4e-a60a-99ed2f6b78d5","Type":"ContainerStarted","Data":"f0c4756bbe654c59c3797b33d4eff4018a5fdae23e85170dd6f01cafd4497468"} Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.400178 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-jzz2c" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.402096 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-jzz2c" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.403800 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-wv566" 
event={"ID":"0d0b662b-c8bd-455c-95a3-a0a9cd901cac","Type":"ContainerStarted","Data":"b5bebb85f1e22b6e3959227d2d76f25f767d062c5cbc2f5a8c6b5fa839f9732d"} Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.412496 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6dcc8b976-7cnps" podStartSLOduration=3.14958704 podStartE2EDuration="24.412480615s" podCreationTimestamp="2025-12-09 15:08:32 +0000 UTC" firstStartedPulling="2025-12-09 15:08:34.704690463 +0000 UTC m=+762.428395352" lastFinishedPulling="2025-12-09 15:08:55.967584038 +0000 UTC m=+783.691288927" observedRunningTime="2025-12-09 15:08:56.405991366 +0000 UTC m=+784.129696275" watchObservedRunningTime="2025-12-09 15:08:56.412480615 +0000 UTC m=+784.136185504" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.454511 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-jzz2c" podStartSLOduration=5.590155618 podStartE2EDuration="33.454487649s" podCreationTimestamp="2025-12-09 15:08:23 +0000 UTC" firstStartedPulling="2025-12-09 15:08:26.671297251 +0000 UTC m=+754.395002150" lastFinishedPulling="2025-12-09 15:08:54.535629292 +0000 UTC m=+782.259334181" observedRunningTime="2025-12-09 15:08:56.447884907 +0000 UTC m=+784.171589806" watchObservedRunningTime="2025-12-09 15:08:56.454487649 +0000 UTC m=+784.178192538" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.488402 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-lkzhq" podStartSLOduration=6.559216921 podStartE2EDuration="34.488380678s" podCreationTimestamp="2025-12-09 15:08:22 +0000 UTC" firstStartedPulling="2025-12-09 15:08:27.173049024 +0000 UTC m=+754.896753913" lastFinishedPulling="2025-12-09 15:08:55.102212781 +0000 UTC m=+782.825917670" observedRunningTime="2025-12-09 15:08:56.469825284 +0000 UTC m=+784.193530183" watchObservedRunningTime="2025-12-09 15:08:56.488380678 +0000 UTC m=+784.212085567" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.508457 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-wv566" podStartSLOduration=7.12720299 podStartE2EDuration="34.508437585s" podCreationTimestamp="2025-12-09 15:08:22 +0000 UTC" firstStartedPulling="2025-12-09 15:08:27.173215609 +0000 UTC m=+754.896920498" lastFinishedPulling="2025-12-09 15:08:54.554450204 +0000 UTC m=+782.278155093" observedRunningTime="2025-12-09 15:08:56.502733547 +0000 UTC m=+784.226438426" watchObservedRunningTime="2025-12-09 15:08:56.508437585 +0000 UTC m=+784.232142474" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.556139 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" podStartSLOduration=6.324567041 podStartE2EDuration="33.556114306s" podCreationTimestamp="2025-12-09 15:08:23 +0000 UTC" firstStartedPulling="2025-12-09 15:08:27.322903949 +0000 UTC m=+755.046608838" lastFinishedPulling="2025-12-09 15:08:54.554451214 +0000 UTC m=+782.278156103" observedRunningTime="2025-12-09 15:08:56.550217432 +0000 UTC m=+784.273922331" watchObservedRunningTime="2025-12-09 15:08:56.556114306 +0000 UTC m=+784.279819205" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.584276 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7899b98f5f-qg7zf" podStartSLOduration=5.815893856 podStartE2EDuration="34.584259685s" podCreationTimestamp="2025-12-09 15:08:22 +0000 UTC" firstStartedPulling="2025-12-09 15:08:27.174291249 +0000 UTC m=+754.897996138" lastFinishedPulling="2025-12-09 15:08:55.942657078 +0000 UTC m=+783.666361967" observedRunningTime="2025-12-09 15:08:56.582492067 +0000 UTC m=+784.306196966" watchObservedRunningTime="2025-12-09 15:08:56.584259685 +0000 UTC m=+784.307964574" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.602676 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-q4crj" podStartSLOduration=1.968623274 podStartE2EDuration="27.602658025s" podCreationTimestamp="2025-12-09 15:08:29 +0000 UTC" firstStartedPulling="2025-12-09 15:08:30.334417241 +0000 UTC m=+758.058122130" lastFinishedPulling="2025-12-09 15:08:55.968452002 +0000 UTC m=+783.692156881" observedRunningTime="2025-12-09 15:08:56.602178922 +0000 UTC m=+784.325883801" watchObservedRunningTime="2025-12-09 15:08:56.602658025 +0000 UTC m=+784.326362904" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.625409 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-9kn5t"] Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.629491 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-9kn5t"] Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.824300 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" path="/var/lib/kubelet/pods/b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284/volumes" Dec 09 15:08:56 crc kubenswrapper[5107]: I1209 15:08:56.825467 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa70a71-f986-4810-8d66-a6313df5d522" path="/var/lib/kubelet/pods/baa70a71-f986-4810-8d66-a6313df5d522/volumes" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205128 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205751 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerName="registry-server" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205768 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerName="registry-server" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205786 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerName="extract-content" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205793 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerName="extract-content" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205800 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerName="registry-server" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205806 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerName="registry-server" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205815 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="baa70a71-f986-4810-8d66-a6313df5d522" containerName="registry" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205820 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa70a71-f986-4810-8d66-a6313df5d522" containerName="registry" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205831 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerName="extract-utilities" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205837 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerName="extract-utilities" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205846 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerName="extract-content" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205852 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerName="extract-content" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205865 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerName="extract-utilities" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205870 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerName="extract-utilities" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205971 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b5c6ca00-1ad4-45a5-9c02-e6b4f7a46284" containerName="registry-server" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205986 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="baa70a71-f986-4810-8d66-a6313df5d522" containerName="registry" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.205993 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ba50f4e-f753-47d4-9de7-1f20b82ca936" containerName="registry-server" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.532321 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.532722 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.536515 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.536688 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.536810 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.536823 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.537051 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.537140 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.537188 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-j4vsn\"" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.537254 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.537316 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.652876 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.652937 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.652962 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.652988 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-bin-local\") pod 
\"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653063 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653080 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653125 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653142 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653174 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653202 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653255 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653275 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: 
\"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653310 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653351 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.653375 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754457 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754491 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754517 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754535 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754560 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754587 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754629 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754653 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754682 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754713 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754736 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754777 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754794 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754814 
5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.754834 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.755263 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.755278 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.755286 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.755628 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.755759 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.756092 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.757128 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-unicast-hosts\") pod 
\"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.757298 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.760774 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.760810 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.761257 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.762184 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.763143 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.764108 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.788928 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:57 crc kubenswrapper[5107]: I1209 15:08:57.857766 5107 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:08:58 crc kubenswrapper[5107]: I1209 15:08:58.147004 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 15:08:58 crc kubenswrapper[5107]: I1209 15:08:58.417495 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b","Type":"ContainerStarted","Data":"1045ca8d5d49b3dd7a84851da4c1cea8f8acd8204a73bc772c0208b357a88d0e"} Dec 09 15:09:07 crc kubenswrapper[5107]: I1209 15:09:07.539547 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-jr8l5" Dec 09 15:09:14 crc kubenswrapper[5107]: I1209 15:09:14.154524 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:09:14 crc kubenswrapper[5107]: I1209 15:09:14.155181 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:09:20 crc kubenswrapper[5107]: I1209 15:09:20.578126 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" event={"ID":"29f0b9c6-c235-4331-887a-d8014edf9170","Type":"ContainerStarted","Data":"e44872aeb745f3c31bcfaee626a60570640b5f1a5cdfd5e4270484776c8d0636"} Dec 09 15:09:20 crc kubenswrapper[5107]: I1209 15:09:20.603269 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-q2cpv" podStartSLOduration=10.710928786 podStartE2EDuration="34.603233068s" podCreationTimestamp="2025-12-09 15:08:46 +0000 UTC" firstStartedPulling="2025-12-09 15:08:56.203926737 +0000 UTC m=+783.927631626" lastFinishedPulling="2025-12-09 15:09:20.096231019 +0000 UTC m=+807.819935908" observedRunningTime="2025-12-09 15:09:20.595579616 +0000 UTC m=+808.319284515" watchObservedRunningTime="2025-12-09 15:09:20.603233068 +0000 UTC m=+808.326937957" Dec 09 15:09:21 crc kubenswrapper[5107]: I1209 15:09:21.635051 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b","Type":"ContainerStarted","Data":"f0a221f81a152ef1995b17f17882a6a1e89625ada46a970e3814eaeb65989b7b"} Dec 09 15:09:21 crc kubenswrapper[5107]: I1209 15:09:21.848549 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 15:09:21 crc kubenswrapper[5107]: I1209 15:09:21.897416 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 15:09:22 crc kubenswrapper[5107]: E1209 15:09:22.270685 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb48ec6ed_e8e3_4563_bfa3_ceabea8bb70b.slice/crio-f0a221f81a152ef1995b17f17882a6a1e89625ada46a970e3814eaeb65989b7b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb48ec6ed_e8e3_4563_bfa3_ceabea8bb70b.slice/crio-conmon-f0a221f81a152ef1995b17f17882a6a1e89625ada46a970e3814eaeb65989b7b.scope\": RecentStats: unable to find data in memory cache]" Dec 09 15:09:22 crc kubenswrapper[5107]: I1209 15:09:22.641064 5107 generic.go:358] "Generic (PLEG): container finished" podID="b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b" containerID="f0a221f81a152ef1995b17f17882a6a1e89625ada46a970e3814eaeb65989b7b" exitCode=0 Dec 09 15:09:22 crc kubenswrapper[5107]: I1209 15:09:22.641128 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b","Type":"ContainerDied","Data":"f0a221f81a152ef1995b17f17882a6a1e89625ada46a970e3814eaeb65989b7b"} Dec 09 15:09:23 crc kubenswrapper[5107]: I1209 15:09:23.648656 5107 generic.go:358] "Generic (PLEG): container finished" podID="b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b" containerID="4132875a3dd80ee2fbfaad18464f8bc90fe26081807ae3f355895fdbadf784cb" exitCode=0 Dec 09 15:09:23 crc kubenswrapper[5107]: I1209 15:09:23.648742 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b","Type":"ContainerDied","Data":"4132875a3dd80ee2fbfaad18464f8bc90fe26081807ae3f355895fdbadf784cb"} Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.048442 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.052278 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.054660 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.055242 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.055515 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.056960 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-9pqzk\"" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.062651 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.157719 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.157761 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.157775 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.157869 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.157974 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.158037 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-ca-bundles\") pod 
\"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.158136 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zf8k\" (UniqueName: \"kubernetes.io/projected/021cfe9c-c8db-460a-9dab-adf2942330e2-kube-api-access-9zf8k\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.158202 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.158227 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.158252 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.158353 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.158398 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260115 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260208 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-proxy-ca-bundles\") pod 
\"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260252 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260292 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260357 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260392 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260445 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260491 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260550 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260643 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260811 5107 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.260973 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.261092 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.261105 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.261122 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.261898 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.262005 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9zf8k\" (UniqueName: \"kubernetes.io/projected/021cfe9c-c8db-460a-9dab-adf2942330e2-kube-api-access-9zf8k\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.262524 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.263029 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: 
\"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.263159 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.263306 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.273975 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.273975 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.279478 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zf8k\" (UniqueName: \"kubernetes.io/projected/021cfe9c-c8db-460a-9dab-adf2942330e2-kube-api-access-9zf8k\") pod \"service-telemetry-operator-1-build\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.366329 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.526814 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zcppf"] Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.543532 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zcppf"] Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.543611 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.566243 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g47xw\" (UniqueName: \"kubernetes.io/projected/fe81a41c-5ec3-4524-928f-c0c332366270-kube-api-access-g47xw\") pod \"community-operators-zcppf\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.566379 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-catalog-content\") pod \"community-operators-zcppf\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.566404 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-utilities\") pod \"community-operators-zcppf\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.658219 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b","Type":"ContainerStarted","Data":"659b16281896fcbe0338f8094c19ce12fa60e4b418e09f63323c3bc16eccb352"} Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.658274 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.668242 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g47xw\" (UniqueName: \"kubernetes.io/projected/fe81a41c-5ec3-4524-928f-c0c332366270-kube-api-access-g47xw\") pod \"community-operators-zcppf\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.668381 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-catalog-content\") pod \"community-operators-zcppf\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.668411 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-utilities\") pod \"community-operators-zcppf\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.668879 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-catalog-content\") pod \"community-operators-zcppf\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.669023 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-utilities\") pod \"community-operators-zcppf\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.691764 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g47xw\" (UniqueName: \"kubernetes.io/projected/fe81a41c-5ec3-4524-928f-c0c332366270-kube-api-access-g47xw\") pod \"community-operators-zcppf\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.692706 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=5.34123774 podStartE2EDuration="27.692688249s" podCreationTimestamp="2025-12-09 15:08:57 +0000 UTC" firstStartedPulling="2025-12-09 15:08:58.168755008 +0000 UTC m=+785.892459897" lastFinishedPulling="2025-12-09 15:09:20.520205517 +0000 UTC m=+808.243910406" observedRunningTime="2025-12-09 15:09:24.690887949 +0000 UTC m=+812.414592848" watchObservedRunningTime="2025-12-09 15:09:24.692688249 +0000 UTC m=+812.416393138" Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.840437 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 15:09:24 crc kubenswrapper[5107]: I1209 15:09:24.859635 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.124785 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zcppf"] Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.666945 5107 generic.go:358] "Generic (PLEG): container finished" podID="fe81a41c-5ec3-4524-928f-c0c332366270" containerID="cec7374c824e6afe8ccf5da1f8084eff0ccd14827f28bc7af71ecba9324b0893" exitCode=0 Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.667074 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcppf" event={"ID":"fe81a41c-5ec3-4524-928f-c0c332366270","Type":"ContainerDied","Data":"cec7374c824e6afe8ccf5da1f8084eff0ccd14827f28bc7af71ecba9324b0893"} Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.667442 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcppf" event={"ID":"fe81a41c-5ec3-4524-928f-c0c332366270","Type":"ContainerStarted","Data":"3bcd7d0592bf18c03ab491ff9c9e9ee9a8c09074398506d9ea1f037dedefc7dd"} Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.669310 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"021cfe9c-c8db-460a-9dab-adf2942330e2","Type":"ContainerStarted","Data":"7d137bc3b1ebe9c44e5c540448720b90f4706497f31a77e2372ca38e9cf979aa"} Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.792273 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l"] Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.809299 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.812175 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l"] Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.812934 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.813217 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.813403 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-vwzh4\"" Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.890727 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/88caaea2-7e01-446f-be64-427e01faec3b-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-vg94l\" (UID: \"88caaea2-7e01-446f-be64-427e01faec3b\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.890883 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59r85\" (UniqueName: \"kubernetes.io/projected/88caaea2-7e01-446f-be64-427e01faec3b-kube-api-access-59r85\") pod \"cert-manager-cainjector-7dbf76d5c8-vg94l\" (UID: \"88caaea2-7e01-446f-be64-427e01faec3b\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.992737 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-59r85\" (UniqueName: \"kubernetes.io/projected/88caaea2-7e01-446f-be64-427e01faec3b-kube-api-access-59r85\") pod \"cert-manager-cainjector-7dbf76d5c8-vg94l\" (UID: \"88caaea2-7e01-446f-be64-427e01faec3b\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" Dec 09 15:09:25 crc kubenswrapper[5107]: I1209 15:09:25.992828 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/88caaea2-7e01-446f-be64-427e01faec3b-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-vg94l\" (UID: \"88caaea2-7e01-446f-be64-427e01faec3b\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" Dec 09 15:09:26 crc kubenswrapper[5107]: I1209 15:09:26.014039 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-59r85\" (UniqueName: \"kubernetes.io/projected/88caaea2-7e01-446f-be64-427e01faec3b-kube-api-access-59r85\") pod \"cert-manager-cainjector-7dbf76d5c8-vg94l\" (UID: \"88caaea2-7e01-446f-be64-427e01faec3b\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" Dec 09 15:09:26 crc kubenswrapper[5107]: I1209 15:09:26.019407 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/88caaea2-7e01-446f-be64-427e01faec3b-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-vg94l\" (UID: \"88caaea2-7e01-446f-be64-427e01faec3b\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" Dec 09 15:09:26 crc kubenswrapper[5107]: I1209 15:09:26.160901 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" Dec 09 15:09:26 crc kubenswrapper[5107]: I1209 15:09:26.414622 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l"] Dec 09 15:09:26 crc kubenswrapper[5107]: W1209 15:09:26.428359 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88caaea2_7e01_446f_be64_427e01faec3b.slice/crio-cb62678488b61585a1250609e048adf8195f0a93c34207e6c00d376acf6141e8 WatchSource:0}: Error finding container cb62678488b61585a1250609e048adf8195f0a93c34207e6c00d376acf6141e8: Status 404 returned error can't find the container with id cb62678488b61585a1250609e048adf8195f0a93c34207e6c00d376acf6141e8 Dec 09 15:09:26 crc kubenswrapper[5107]: I1209 15:09:26.697468 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" event={"ID":"88caaea2-7e01-446f-be64-427e01faec3b","Type":"ContainerStarted","Data":"cb62678488b61585a1250609e048adf8195f0a93c34207e6c00d376acf6141e8"} Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.203687 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq"] Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.213525 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq"] Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.213664 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.215920 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-shvcp\"" Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.385000 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1715f827-212a-4f32-bcb1-f28f027ea3e8-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-s8xvq\" (UID: \"1715f827-212a-4f32-bcb1-f28f027ea3e8\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.385385 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh4mb\" (UniqueName: \"kubernetes.io/projected/1715f827-212a-4f32-bcb1-f28f027ea3e8-kube-api-access-fh4mb\") pod \"cert-manager-webhook-7894b5b9b4-s8xvq\" (UID: \"1715f827-212a-4f32-bcb1-f28f027ea3e8\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.486656 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1715f827-212a-4f32-bcb1-f28f027ea3e8-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-s8xvq\" (UID: \"1715f827-212a-4f32-bcb1-f28f027ea3e8\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.486705 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fh4mb\" (UniqueName: \"kubernetes.io/projected/1715f827-212a-4f32-bcb1-f28f027ea3e8-kube-api-access-fh4mb\") pod \"cert-manager-webhook-7894b5b9b4-s8xvq\" (UID: \"1715f827-212a-4f32-bcb1-f28f027ea3e8\") " 
pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.510313 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1715f827-212a-4f32-bcb1-f28f027ea3e8-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-s8xvq\" (UID: \"1715f827-212a-4f32-bcb1-f28f027ea3e8\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.520081 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh4mb\" (UniqueName: \"kubernetes.io/projected/1715f827-212a-4f32-bcb1-f28f027ea3e8-kube-api-access-fh4mb\") pod \"cert-manager-webhook-7894b5b9b4-s8xvq\" (UID: \"1715f827-212a-4f32-bcb1-f28f027ea3e8\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.599582 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.708514 5107 generic.go:358] "Generic (PLEG): container finished" podID="fe81a41c-5ec3-4524-928f-c0c332366270" containerID="74cd12b07640a329285224f6438c325aa0351320957847fad89b7addaad2001a" exitCode=0 Dec 09 15:09:27 crc kubenswrapper[5107]: I1209 15:09:27.708763 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcppf" event={"ID":"fe81a41c-5ec3-4524-928f-c0c332366270","Type":"ContainerDied","Data":"74cd12b07640a329285224f6438c325aa0351320957847fad89b7addaad2001a"} Dec 09 15:09:34 crc kubenswrapper[5107]: I1209 15:09:34.506160 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 15:09:35 crc kubenswrapper[5107]: I1209 15:09:35.783965 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b" containerName="elasticsearch" probeResult="failure" output=< Dec 09 15:09:35 crc kubenswrapper[5107]: {"timestamp": "2025-12-09T15:09:35+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 09 15:09:35 crc kubenswrapper[5107]: > Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.166240 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.858631 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.858996 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.861555 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.861931 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.862807 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.918414 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.918811 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.918840 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.919108 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.919265 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.919312 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkwzr\" (UniqueName: \"kubernetes.io/projected/36d11c80-886f-4fa1-bfd4-2f94719344e3-kube-api-access-dkwzr\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.919435 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" 
(UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.919477 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.919591 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.919643 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.919800 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:36 crc kubenswrapper[5107]: I1209 15:09:36.920033 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021357 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021413 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dkwzr\" (UniqueName: \"kubernetes.io/projected/36d11c80-886f-4fa1-bfd4-2f94719344e3-kube-api-access-dkwzr\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021438 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-system-configs\") pod 
\"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021457 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021476 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021490 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021510 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021537 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021568 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021589 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021604 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: 
I1209 15:09:37.021638 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021734 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021954 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.022131 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.022129 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.022221 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.022418 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.022646 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.021906 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: 
\"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.024065 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.027713 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.037656 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.056610 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkwzr\" (UniqueName: \"kubernetes.io/projected/36d11c80-886f-4fa1-bfd4-2f94719344e3-kube-api-access-dkwzr\") pod \"service-telemetry-operator-2-build\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:37 crc kubenswrapper[5107]: I1209 15:09:37.183536 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:40 crc kubenswrapper[5107]: I1209 15:09:40.328775 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq"] Dec 09 15:09:40 crc kubenswrapper[5107]: I1209 15:09:40.560934 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 15:09:40 crc kubenswrapper[5107]: I1209 15:09:40.906687 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" event={"ID":"88caaea2-7e01-446f-be64-427e01faec3b","Type":"ContainerStarted","Data":"b8adc84b30625a1e1ae397dc8aabd306dbe2b59c9f40bb99fc3b326b26aa9646"} Dec 09 15:09:40 crc kubenswrapper[5107]: I1209 15:09:40.908491 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="b48ec6ed-e8e3-4563-bfa3-ceabea8bb70b" containerName="elasticsearch" probeResult="failure" output=< Dec 09 15:09:40 crc kubenswrapper[5107]: {"timestamp": "2025-12-09T15:09:40+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 09 15:09:40 crc kubenswrapper[5107]: > Dec 09 15:09:40 crc kubenswrapper[5107]: I1209 15:09:40.916962 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" event={"ID":"1715f827-212a-4f32-bcb1-f28f027ea3e8","Type":"ContainerStarted","Data":"0257e7c4417e237a9f4ce75d2bac76faf14e7428120d63232efb2a1f3e6a660c"} Dec 09 15:09:40 crc kubenswrapper[5107]: I1209 15:09:40.917013 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" event={"ID":"1715f827-212a-4f32-bcb1-f28f027ea3e8","Type":"ContainerStarted","Data":"87eaa3a13748421f3816d724deef3c88d7d031313f67494ca1ee39e4fe0d58e1"} Dec 09 15:09:40 crc kubenswrapper[5107]: I1209 15:09:40.917676 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" Dec 09 15:09:40 crc kubenswrapper[5107]: I1209 15:09:40.933828 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"36d11c80-886f-4fa1-bfd4-2f94719344e3","Type":"ContainerStarted","Data":"faf46227ae306d7772223c963bf47b66cdcdffe02232c615e51e9a7d294fe7f0"} Dec 09 15:09:40 crc kubenswrapper[5107]: I1209 15:09:40.948992 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcppf" event={"ID":"fe81a41c-5ec3-4524-928f-c0c332366270","Type":"ContainerStarted","Data":"5753911fe9412e304d1c748c12f0672842fc78be2c6754be0d075a9b8a238204"} Dec 09 15:09:40 crc kubenswrapper[5107]: I1209 15:09:40.985145 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vg94l" podStartSLOduration=2.191404834 podStartE2EDuration="15.985123256s" podCreationTimestamp="2025-12-09 15:09:25 +0000 UTC" firstStartedPulling="2025-12-09 15:09:26.431851188 +0000 UTC m=+814.155556077" lastFinishedPulling="2025-12-09 15:09:40.22556961 +0000 UTC m=+827.949274499" observedRunningTime="2025-12-09 15:09:40.984838938 +0000 UTC m=+828.708543837" watchObservedRunningTime="2025-12-09 15:09:40.985123256 +0000 UTC m=+828.708828145" Dec 09 15:09:41 crc kubenswrapper[5107]: I1209 15:09:41.029408 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" podStartSLOduration=14.029383192 podStartE2EDuration="14.029383192s" podCreationTimestamp="2025-12-09 15:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:09:41.019646602 +0000 UTC m=+828.743351491" watchObservedRunningTime="2025-12-09 15:09:41.029383192 +0000 UTC m=+828.753088081" Dec 09 15:09:42 crc kubenswrapper[5107]: I1209 15:09:42.815186 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zcppf" podStartSLOduration=18.094577957 podStartE2EDuration="18.815159993s" podCreationTimestamp="2025-12-09 15:09:24 +0000 UTC" firstStartedPulling="2025-12-09 15:09:25.667819068 +0000 UTC m=+813.391523957" lastFinishedPulling="2025-12-09 15:09:26.388401104 +0000 UTC m=+814.112105993" observedRunningTime="2025-12-09 15:09:41.061852041 +0000 UTC m=+828.785556930" watchObservedRunningTime="2025-12-09 15:09:42.815159993 +0000 UTC m=+830.538864892" Dec 09 15:09:42 crc kubenswrapper[5107]: I1209 15:09:42.816425 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-xkvpx"] Dec 09 15:09:42 crc kubenswrapper[5107]: I1209 15:09:42.884697 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-xkvpx" Dec 09 15:09:42 crc kubenswrapper[5107]: I1209 15:09:42.886988 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-gcb6d\"" Dec 09 15:09:42 crc kubenswrapper[5107]: I1209 15:09:42.896359 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-xkvpx"] Dec 09 15:09:42 crc kubenswrapper[5107]: I1209 15:09:42.926526 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4201ab2c-b950-45a0-9f0d-2a1de42d81f3-bound-sa-token\") pod \"cert-manager-858d87f86b-xkvpx\" (UID: \"4201ab2c-b950-45a0-9f0d-2a1de42d81f3\") " pod="cert-manager/cert-manager-858d87f86b-xkvpx" Dec 09 15:09:42 crc kubenswrapper[5107]: I1209 15:09:42.926603 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzgwn\" (UniqueName: \"kubernetes.io/projected/4201ab2c-b950-45a0-9f0d-2a1de42d81f3-kube-api-access-nzgwn\") pod \"cert-manager-858d87f86b-xkvpx\" (UID: \"4201ab2c-b950-45a0-9f0d-2a1de42d81f3\") " pod="cert-manager/cert-manager-858d87f86b-xkvpx" Dec 09 15:09:43 crc kubenswrapper[5107]: I1209 15:09:43.028494 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4201ab2c-b950-45a0-9f0d-2a1de42d81f3-bound-sa-token\") pod \"cert-manager-858d87f86b-xkvpx\" (UID: \"4201ab2c-b950-45a0-9f0d-2a1de42d81f3\") " pod="cert-manager/cert-manager-858d87f86b-xkvpx" Dec 09 15:09:43 crc kubenswrapper[5107]: I1209 15:09:43.029027 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nzgwn\" (UniqueName: \"kubernetes.io/projected/4201ab2c-b950-45a0-9f0d-2a1de42d81f3-kube-api-access-nzgwn\") pod \"cert-manager-858d87f86b-xkvpx\" (UID: \"4201ab2c-b950-45a0-9f0d-2a1de42d81f3\") " pod="cert-manager/cert-manager-858d87f86b-xkvpx" Dec 09 15:09:43 crc kubenswrapper[5107]: I1209 15:09:43.062148 5107 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4201ab2c-b950-45a0-9f0d-2a1de42d81f3-bound-sa-token\") pod \"cert-manager-858d87f86b-xkvpx\" (UID: \"4201ab2c-b950-45a0-9f0d-2a1de42d81f3\") " pod="cert-manager/cert-manager-858d87f86b-xkvpx" Dec 09 15:09:43 crc kubenswrapper[5107]: I1209 15:09:43.062242 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzgwn\" (UniqueName: \"kubernetes.io/projected/4201ab2c-b950-45a0-9f0d-2a1de42d81f3-kube-api-access-nzgwn\") pod \"cert-manager-858d87f86b-xkvpx\" (UID: \"4201ab2c-b950-45a0-9f0d-2a1de42d81f3\") " pod="cert-manager/cert-manager-858d87f86b-xkvpx" Dec 09 15:09:43 crc kubenswrapper[5107]: I1209 15:09:43.211295 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-xkvpx" Dec 09 15:09:43 crc kubenswrapper[5107]: I1209 15:09:43.491216 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-xkvpx"] Dec 09 15:09:43 crc kubenswrapper[5107]: W1209 15:09:43.498499 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4201ab2c_b950_45a0_9f0d_2a1de42d81f3.slice/crio-158e360e9c94348b2411c226fb385ed34c32a1396b9efdc6b99c73be96c86cdd WatchSource:0}: Error finding container 158e360e9c94348b2411c226fb385ed34c32a1396b9efdc6b99c73be96c86cdd: Status 404 returned error can't find the container with id 158e360e9c94348b2411c226fb385ed34c32a1396b9efdc6b99c73be96c86cdd Dec 09 15:09:43 crc kubenswrapper[5107]: I1209 15:09:43.969007 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-xkvpx" event={"ID":"4201ab2c-b950-45a0-9f0d-2a1de42d81f3","Type":"ContainerStarted","Data":"158e360e9c94348b2411c226fb385ed34c32a1396b9efdc6b99c73be96c86cdd"} Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.154303 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.154430 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.154475 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.367870 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be297312e5ca9ef320955bd5db7e8e291d1e5bad441d948b03d47094da2e57b8"} pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.368319 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" 
containerName="machine-config-daemon" containerID="cri-o://be297312e5ca9ef320955bd5db7e8e291d1e5bad441d948b03d47094da2e57b8" gracePeriod=600 Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.860767 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.860829 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.908273 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.977478 5107 generic.go:358] "Generic (PLEG): container finished" podID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerID="be297312e5ca9ef320955bd5db7e8e291d1e5bad441d948b03d47094da2e57b8" exitCode=0 Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.977663 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerDied","Data":"be297312e5ca9ef320955bd5db7e8e291d1e5bad441d948b03d47094da2e57b8"} Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.977747 5107 scope.go:117] "RemoveContainer" containerID="f703877f2270bc9edf98fb7d33fe97e11bfba514d68412fac7acd3c7d8621675" Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.980558 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"021cfe9c-c8db-460a-9dab-adf2942330e2","Type":"ContainerStarted","Data":"5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c"} Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.980705 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="021cfe9c-c8db-460a-9dab-adf2942330e2" containerName="manage-dockerfile" containerID="cri-o://5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c" gracePeriod=30 Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.982640 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-xkvpx" event={"ID":"4201ab2c-b950-45a0-9f0d-2a1de42d81f3","Type":"ContainerStarted","Data":"0daf2e46377584cccbebe606af093d332943006d971b203b6a082fd18d663772"} Dec 09 15:09:44 crc kubenswrapper[5107]: I1209 15:09:44.984918 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"36d11c80-886f-4fa1-bfd4-2f94719344e3","Type":"ContainerStarted","Data":"985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567"} Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.032668 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-xkvpx" podStartSLOduration=3.032650784 podStartE2EDuration="3.032650784s" podCreationTimestamp="2025-12-09 15:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 15:09:45.031898374 +0000 UTC m=+832.755603263" watchObservedRunningTime="2025-12-09 15:09:45.032650784 +0000 UTC m=+832.756355673" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.129626 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53686: no 
serving certificate available for the kubelet" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.450171 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_021cfe9c-c8db-460a-9dab-adf2942330e2/manage-dockerfile/0.log" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.450526 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593199 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-root\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593272 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-buildworkdir\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593298 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-system-configs\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593415 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-proxy-ca-bundles\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593454 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-buildcachedir\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593476 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-ca-bundles\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593508 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-build-blob-cache\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593550 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-pull\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593567 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-node-pullsecrets\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593611 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-run\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593656 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zf8k\" (UniqueName: \"kubernetes.io/projected/021cfe9c-c8db-460a-9dab-adf2942330e2-kube-api-access-9zf8k\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.593672 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-push\") pod \"021cfe9c-c8db-460a-9dab-adf2942330e2\" (UID: \"021cfe9c-c8db-460a-9dab-adf2942330e2\") " Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.594092 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.594205 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.594206 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.594478 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.594513 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.594693 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.594731 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.594768 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.595001 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.676110 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-push" (OuterVolumeSpecName: "builder-dockercfg-9pqzk-push") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "builder-dockercfg-9pqzk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.676249 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/021cfe9c-c8db-460a-9dab-adf2942330e2-kube-api-access-9zf8k" (OuterVolumeSpecName: "kube-api-access-9zf8k") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "kube-api-access-9zf8k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.676900 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-pull" (OuterVolumeSpecName: "builder-dockercfg-9pqzk-pull") pod "021cfe9c-c8db-460a-9dab-adf2942330e2" (UID: "021cfe9c-c8db-460a-9dab-adf2942330e2"). InnerVolumeSpecName "builder-dockercfg-9pqzk-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695287 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695324 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9zf8k\" (UniqueName: \"kubernetes.io/projected/021cfe9c-c8db-460a-9dab-adf2942330e2-kube-api-access-9zf8k\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695365 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-push\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695379 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695393 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695406 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695416 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695426 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695436 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/021cfe9c-c8db-460a-9dab-adf2942330e2-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695446 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/021cfe9c-c8db-460a-9dab-adf2942330e2-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695457 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/021cfe9c-c8db-460a-9dab-adf2942330e2-builder-dockercfg-9pqzk-pull\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.695468 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/021cfe9c-c8db-460a-9dab-adf2942330e2-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.994019 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerStarted","Data":"803bbf6bec51d832e2da8a834695540d5db512be8f49d6c1ef0e6c6c554fc66e"} Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.996080 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_021cfe9c-c8db-460a-9dab-adf2942330e2/manage-dockerfile/0.log" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.996132 5107 generic.go:358] "Generic (PLEG): container finished" podID="021cfe9c-c8db-460a-9dab-adf2942330e2" containerID="5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c" exitCode=1 Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.996549 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.996688 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"021cfe9c-c8db-460a-9dab-adf2942330e2","Type":"ContainerDied","Data":"5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c"} Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.996740 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"021cfe9c-c8db-460a-9dab-adf2942330e2","Type":"ContainerDied","Data":"7d137bc3b1ebe9c44e5c540448720b90f4706497f31a77e2372ca38e9cf979aa"} Dec 09 15:09:45 crc kubenswrapper[5107]: I1209 15:09:45.996769 5107 scope.go:117] "RemoveContainer" containerID="5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c" Dec 09 15:09:46 crc kubenswrapper[5107]: I1209 15:09:46.020384 5107 scope.go:117] "RemoveContainer" containerID="5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c" Dec 09 15:09:46 crc kubenswrapper[5107]: E1209 15:09:46.020733 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c\": container with ID starting with 5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c not found: ID does not exist" containerID="5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c" Dec 09 15:09:46 crc kubenswrapper[5107]: I1209 15:09:46.020765 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c"} err="failed to get container status \"5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c\": rpc error: code = NotFound desc = could not find container \"5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c\": container with ID starting with 5f31f323e29f580d1efbac5986521d892748a36fd084d2af33de3f37ddaaba7c not found: ID does not exist" Dec 09 15:09:46 crc kubenswrapper[5107]: I1209 15:09:46.033544 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 15:09:46 crc kubenswrapper[5107]: I1209 15:09:46.038271 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 15:09:46 crc kubenswrapper[5107]: I1209 15:09:46.161582 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 15:09:46 crc 
kubenswrapper[5107]: I1209 15:09:46.268329 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 15:09:46 crc kubenswrapper[5107]: I1209 15:09:46.826028 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021cfe9c-c8db-460a-9dab-adf2942330e2" path="/var/lib/kubelet/pods/021cfe9c-c8db-460a-9dab-adf2942330e2/volumes" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.003097 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="36d11c80-886f-4fa1-bfd4-2f94719344e3" containerName="git-clone" containerID="cri-o://985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567" gracePeriod=30 Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.480251 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_36d11c80-886f-4fa1-bfd4-2f94719344e3/git-clone/0.log" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.480536 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622457 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-run\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622528 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-ca-bundles\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622555 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildworkdir\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622588 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-system-configs\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622620 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildcachedir\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622645 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-node-pullsecrets\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622715 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkwzr\" 
(UniqueName: \"kubernetes.io/projected/36d11c80-886f-4fa1-bfd4-2f94719344e3-kube-api-access-dkwzr\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622766 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-push\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622814 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-proxy-ca-bundles\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622831 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-root\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622881 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-blob-cache\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.622921 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-pull\") pod \"36d11c80-886f-4fa1-bfd4-2f94719344e3\" (UID: \"36d11c80-886f-4fa1-bfd4-2f94719344e3\") " Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.624365 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.624537 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.624619 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.627490 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.627683 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.627907 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.628154 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.628292 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.629035 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-pull" (OuterVolumeSpecName: "builder-dockercfg-9pqzk-pull") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "builder-dockercfg-9pqzk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.630927 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.631814 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d11c80-886f-4fa1-bfd4-2f94719344e3-kube-api-access-dkwzr" (OuterVolumeSpecName: "kube-api-access-dkwzr") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "kube-api-access-dkwzr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.633163 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-push" (OuterVolumeSpecName: "builder-dockercfg-9pqzk-push") pod "36d11c80-886f-4fa1-bfd4-2f94719344e3" (UID: "36d11c80-886f-4fa1-bfd4-2f94719344e3"). InnerVolumeSpecName "builder-dockercfg-9pqzk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724125 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724172 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-pull\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724187 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724195 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724206 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724214 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724221 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724228 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/36d11c80-886f-4fa1-bfd4-2f94719344e3-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724238 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dkwzr\" (UniqueName: \"kubernetes.io/projected/36d11c80-886f-4fa1-bfd4-2f94719344e3-kube-api-access-dkwzr\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724248 5107 
reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/36d11c80-886f-4fa1-bfd4-2f94719344e3-builder-dockercfg-9pqzk-push\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724258 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36d11c80-886f-4fa1-bfd4-2f94719344e3-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.724269 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/36d11c80-886f-4fa1-bfd4-2f94719344e3-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:47 crc kubenswrapper[5107]: I1209 15:09:47.964045 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-s8xvq" Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.011526 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_36d11c80-886f-4fa1-bfd4-2f94719344e3/git-clone/0.log" Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.011581 5107 generic.go:358] "Generic (PLEG): container finished" podID="36d11c80-886f-4fa1-bfd4-2f94719344e3" containerID="985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567" exitCode=1 Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.011732 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.011713 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"36d11c80-886f-4fa1-bfd4-2f94719344e3","Type":"ContainerDied","Data":"985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567"} Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.011907 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"36d11c80-886f-4fa1-bfd4-2f94719344e3","Type":"ContainerDied","Data":"faf46227ae306d7772223c963bf47b66cdcdffe02232c615e51e9a7d294fe7f0"} Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.011923 5107 scope.go:117] "RemoveContainer" containerID="985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567" Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.038872 5107 scope.go:117] "RemoveContainer" containerID="985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567" Dec 09 15:09:48 crc kubenswrapper[5107]: E1209 15:09:48.039255 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567\": container with ID starting with 985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567 not found: ID does not exist" containerID="985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567" Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.039282 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567"} err="failed to get container status \"985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567\": rpc error: code = NotFound desc = could not find container 
\"985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567\": container with ID starting with 985a4974f23af3504ed1157e1b562429500b8ab473cb9a9f10c94b8ede73c567 not found: ID does not exist" Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.054003 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.061666 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 15:09:48 crc kubenswrapper[5107]: I1209 15:09:48.826089 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d11c80-886f-4fa1-bfd4-2f94719344e3" path="/var/lib/kubelet/pods/36d11c80-886f-4fa1-bfd4-2f94719344e3/volumes" Dec 09 15:09:55 crc kubenswrapper[5107]: I1209 15:09:55.039961 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:55 crc kubenswrapper[5107]: I1209 15:09:55.083494 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zcppf"] Dec 09 15:09:55 crc kubenswrapper[5107]: I1209 15:09:55.083761 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zcppf" podUID="fe81a41c-5ec3-4524-928f-c0c332366270" containerName="registry-server" containerID="cri-o://5753911fe9412e304d1c748c12f0672842fc78be2c6754be0d075a9b8a238204" gracePeriod=2 Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.074778 5107 generic.go:358] "Generic (PLEG): container finished" podID="fe81a41c-5ec3-4524-928f-c0c332366270" containerID="5753911fe9412e304d1c748c12f0672842fc78be2c6754be0d075a9b8a238204" exitCode=0 Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.074866 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcppf" event={"ID":"fe81a41c-5ec3-4524-928f-c0c332366270","Type":"ContainerDied","Data":"5753911fe9412e304d1c748c12f0672842fc78be2c6754be0d075a9b8a238204"} Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.075247 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcppf" event={"ID":"fe81a41c-5ec3-4524-928f-c0c332366270","Type":"ContainerDied","Data":"3bcd7d0592bf18c03ab491ff9c9e9ee9a8c09074398506d9ea1f037dedefc7dd"} Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.075263 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bcd7d0592bf18c03ab491ff9c9e9ee9a8c09074398506d9ea1f037dedefc7dd" Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.086941 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.241241 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g47xw\" (UniqueName: \"kubernetes.io/projected/fe81a41c-5ec3-4524-928f-c0c332366270-kube-api-access-g47xw\") pod \"fe81a41c-5ec3-4524-928f-c0c332366270\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.241582 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-utilities\") pod \"fe81a41c-5ec3-4524-928f-c0c332366270\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.241825 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-catalog-content\") pod \"fe81a41c-5ec3-4524-928f-c0c332366270\" (UID: \"fe81a41c-5ec3-4524-928f-c0c332366270\") " Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.242697 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-utilities" (OuterVolumeSpecName: "utilities") pod "fe81a41c-5ec3-4524-928f-c0c332366270" (UID: "fe81a41c-5ec3-4524-928f-c0c332366270"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.248232 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe81a41c-5ec3-4524-928f-c0c332366270-kube-api-access-g47xw" (OuterVolumeSpecName: "kube-api-access-g47xw") pod "fe81a41c-5ec3-4524-928f-c0c332366270" (UID: "fe81a41c-5ec3-4524-928f-c0c332366270"). InnerVolumeSpecName "kube-api-access-g47xw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.288108 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe81a41c-5ec3-4524-928f-c0c332366270" (UID: "fe81a41c-5ec3-4524-928f-c0c332366270"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.345466 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.345509 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g47xw\" (UniqueName: \"kubernetes.io/projected/fe81a41c-5ec3-4524-928f-c0c332366270-kube-api-access-g47xw\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:56 crc kubenswrapper[5107]: I1209 15:09:56.345523 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe81a41c-5ec3-4524-928f-c0c332366270-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.084023 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zcppf" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.102480 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zcppf"] Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.107367 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zcppf"] Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.675128 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676009 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36d11c80-886f-4fa1-bfd4-2f94719344e3" containerName="git-clone" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676031 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d11c80-886f-4fa1-bfd4-2f94719344e3" containerName="git-clone" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676049 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="021cfe9c-c8db-460a-9dab-adf2942330e2" containerName="manage-dockerfile" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676054 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="021cfe9c-c8db-460a-9dab-adf2942330e2" containerName="manage-dockerfile" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676064 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe81a41c-5ec3-4524-928f-c0c332366270" containerName="extract-content" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676069 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe81a41c-5ec3-4524-928f-c0c332366270" containerName="extract-content" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676079 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe81a41c-5ec3-4524-928f-c0c332366270" containerName="registry-server" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676086 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe81a41c-5ec3-4524-928f-c0c332366270" containerName="registry-server" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676097 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe81a41c-5ec3-4524-928f-c0c332366270" containerName="extract-utilities" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676103 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe81a41c-5ec3-4524-928f-c0c332366270" containerName="extract-utilities" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676214 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="021cfe9c-c8db-460a-9dab-adf2942330e2" containerName="manage-dockerfile" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676227 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="fe81a41c-5ec3-4524-928f-c0c332366270" containerName="registry-server" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.676236 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="36d11c80-886f-4fa1-bfd4-2f94719344e3" containerName="git-clone" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.705158 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.705352 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.707373 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-9pqzk\"" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.707918 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.708580 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.708771 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.868399 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.868534 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.868572 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.868606 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.868627 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbzsw\" (UniqueName: \"kubernetes.io/projected/2e0de03a-453d-4ba7-97d0-654ab54ca200-kube-api-access-jbzsw\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.868881 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: 
I1209 15:09:57.868959 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.869026 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.869050 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.869141 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.869261 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.869401 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.970352 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.970416 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.970457 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.970479 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.970517 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.970733 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.970754 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.970790 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.970882 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.970934 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.971164 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-run\") pod 
\"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.971200 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.971164 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.971362 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.971458 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.971695 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.971757 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.971821 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.972220 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.972282 5107 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.971851 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jbzsw\" (UniqueName: \"kubernetes.io/projected/2e0de03a-453d-4ba7-97d0-654ab54ca200-kube-api-access-jbzsw\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.976993 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.977708 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:57 crc kubenswrapper[5107]: I1209 15:09:57.991404 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbzsw\" (UniqueName: \"kubernetes.io/projected/2e0de03a-453d-4ba7-97d0-654ab54ca200-kube-api-access-jbzsw\") pod \"service-telemetry-operator-3-build\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:58 crc kubenswrapper[5107]: I1209 15:09:58.018417 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:09:58 crc kubenswrapper[5107]: W1209 15:09:58.435423 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e0de03a_453d_4ba7_97d0_654ab54ca200.slice/crio-7f1a3319fbdb6a1f3a4f7ef73096e224829535c4d8d8656a0d7daad7cdc95e7b WatchSource:0}: Error finding container 7f1a3319fbdb6a1f3a4f7ef73096e224829535c4d8d8656a0d7daad7cdc95e7b: Status 404 returned error can't find the container with id 7f1a3319fbdb6a1f3a4f7ef73096e224829535c4d8d8656a0d7daad7cdc95e7b Dec 09 15:09:58 crc kubenswrapper[5107]: I1209 15:09:58.435618 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 15:09:58 crc kubenswrapper[5107]: I1209 15:09:58.826791 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe81a41c-5ec3-4524-928f-c0c332366270" path="/var/lib/kubelet/pods/fe81a41c-5ec3-4524-928f-c0c332366270/volumes" Dec 09 15:09:59 crc kubenswrapper[5107]: I1209 15:09:59.100055 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2e0de03a-453d-4ba7-97d0-654ab54ca200","Type":"ContainerStarted","Data":"332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873"} Dec 09 15:09:59 crc kubenswrapper[5107]: I1209 15:09:59.100112 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2e0de03a-453d-4ba7-97d0-654ab54ca200","Type":"ContainerStarted","Data":"7f1a3319fbdb6a1f3a4f7ef73096e224829535c4d8d8656a0d7daad7cdc95e7b"} Dec 09 15:09:59 crc kubenswrapper[5107]: I1209 15:09:59.152938 5107 ???:1] "http: TLS handshake error from 192.168.126.11:44586: no serving certificate available for the kubelet" Dec 09 15:10:00 crc kubenswrapper[5107]: I1209 15:10:00.181246 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.114093 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="2e0de03a-453d-4ba7-97d0-654ab54ca200" containerName="git-clone" containerID="cri-o://332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873" gracePeriod=30 Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.569744 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_2e0de03a-453d-4ba7-97d0-654ab54ca200/git-clone/0.log" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.569819 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624312 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildcachedir\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624411 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-blob-cache\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624427 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624448 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-run\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624583 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-node-pullsecrets\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624614 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-system-configs\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624660 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-proxy-ca-bundles\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624705 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildworkdir\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624731 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-push\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624757 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-jbzsw\" (UniqueName: \"kubernetes.io/projected/2e0de03a-453d-4ba7-97d0-654ab54ca200-kube-api-access-jbzsw\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624770 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624809 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-ca-bundles\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624842 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-root\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.624898 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-pull\") pod \"2e0de03a-453d-4ba7-97d0-654ab54ca200\" (UID: \"2e0de03a-453d-4ba7-97d0-654ab54ca200\") " Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.625091 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.625214 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.625446 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.625465 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.625979 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.626377 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.626400 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.626469 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.626480 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.626506 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.626515 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e0de03a-453d-4ba7-97d0-654ab54ca200-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.626523 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.626530 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e0de03a-453d-4ba7-97d0-654ab54ca200-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.632156 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-push" (OuterVolumeSpecName: "builder-dockercfg-9pqzk-push") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "builder-dockercfg-9pqzk-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.632217 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0de03a-453d-4ba7-97d0-654ab54ca200-kube-api-access-jbzsw" (OuterVolumeSpecName: "kube-api-access-jbzsw") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "kube-api-access-jbzsw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.632273 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-pull" (OuterVolumeSpecName: "builder-dockercfg-9pqzk-pull") pod "2e0de03a-453d-4ba7-97d0-654ab54ca200" (UID: "2e0de03a-453d-4ba7-97d0-654ab54ca200"). InnerVolumeSpecName "builder-dockercfg-9pqzk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.727684 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.727741 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.727753 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-push\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.727761 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jbzsw\" (UniqueName: \"kubernetes.io/projected/2e0de03a-453d-4ba7-97d0-654ab54ca200-kube-api-access-jbzsw\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.727770 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e0de03a-453d-4ba7-97d0-654ab54ca200-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:01 crc kubenswrapper[5107]: I1209 15:10:01.727779 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/2e0de03a-453d-4ba7-97d0-654ab54ca200-builder-dockercfg-9pqzk-pull\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:02 crc kubenswrapper[5107]: I1209 15:10:02.122637 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_2e0de03a-453d-4ba7-97d0-654ab54ca200/git-clone/0.log" Dec 09 15:10:02 crc kubenswrapper[5107]: I1209 15:10:02.122697 5107 generic.go:358] "Generic (PLEG): container finished" podID="2e0de03a-453d-4ba7-97d0-654ab54ca200" containerID="332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873" exitCode=1 Dec 09 15:10:02 crc kubenswrapper[5107]: I1209 15:10:02.122758 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2e0de03a-453d-4ba7-97d0-654ab54ca200","Type":"ContainerDied","Data":"332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873"} Dec 09 15:10:02 crc kubenswrapper[5107]: 
I1209 15:10:02.122776 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 09 15:10:02 crc kubenswrapper[5107]: I1209 15:10:02.122789 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2e0de03a-453d-4ba7-97d0-654ab54ca200","Type":"ContainerDied","Data":"7f1a3319fbdb6a1f3a4f7ef73096e224829535c4d8d8656a0d7daad7cdc95e7b"} Dec 09 15:10:02 crc kubenswrapper[5107]: I1209 15:10:02.122808 5107 scope.go:117] "RemoveContainer" containerID="332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873" Dec 09 15:10:02 crc kubenswrapper[5107]: I1209 15:10:02.142870 5107 scope.go:117] "RemoveContainer" containerID="332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873" Dec 09 15:10:02 crc kubenswrapper[5107]: E1209 15:10:02.143620 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873\": container with ID starting with 332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873 not found: ID does not exist" containerID="332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873" Dec 09 15:10:02 crc kubenswrapper[5107]: I1209 15:10:02.143678 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873"} err="failed to get container status \"332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873\": rpc error: code = NotFound desc = could not find container \"332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873\": container with ID starting with 332bba27e709ff3cfabcff4575dd2e0ec5fd10a14474101dc9072537fea12873 not found: ID does not exist" Dec 09 15:10:02 crc kubenswrapper[5107]: I1209 15:10:02.168115 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 15:10:02 crc kubenswrapper[5107]: I1209 15:10:02.174305 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 15:10:02 crc kubenswrapper[5107]: I1209 15:10:02.824861 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0de03a-453d-4ba7-97d0-654ab54ca200" path="/var/lib/kubelet/pods/2e0de03a-453d-4ba7-97d0-654ab54ca200/volumes" Dec 09 15:10:11 crc kubenswrapper[5107]: I1209 15:10:11.686542 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 15:10:11 crc kubenswrapper[5107]: I1209 15:10:11.687729 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e0de03a-453d-4ba7-97d0-654ab54ca200" containerName="git-clone" Dec 09 15:10:11 crc kubenswrapper[5107]: I1209 15:10:11.687746 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0de03a-453d-4ba7-97d0-654ab54ca200" containerName="git-clone" Dec 09 15:10:11 crc kubenswrapper[5107]: I1209 15:10:11.687912 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e0de03a-453d-4ba7-97d0-654ab54ca200" containerName="git-clone" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.242089 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.242872 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.245451 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-9pqzk\"" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.245463 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\"" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.246163 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\"" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.246256 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\"" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372323 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372385 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372419 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372445 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372460 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372475 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372489 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtbkm\" (UniqueName: \"kubernetes.io/projected/28970b70-763e-418b-b123-e3a686819839-kube-api-access-vtbkm\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372531 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372561 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372588 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372641 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.372690 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.473710 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.473766 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-push\") pod 
\"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.473783 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.473798 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.473903 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vtbkm\" (UniqueName: \"kubernetes.io/projected/28970b70-763e-418b-b123-e3a686819839-kube-api-access-vtbkm\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.473933 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.473955 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.473982 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474011 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474030 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474077 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474095 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474216 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474499 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474735 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474862 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474897 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474952 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.474992 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: 
\"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.475039 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.476164 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.481868 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.481881 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.494121 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtbkm\" (UniqueName: \"kubernetes.io/projected/28970b70-763e-418b-b123-e3a686819839-kube-api-access-vtbkm\") pod \"service-telemetry-operator-4-build\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.568527 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:12 crc kubenswrapper[5107]: I1209 15:10:12.805551 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 15:10:13 crc kubenswrapper[5107]: I1209 15:10:13.529296 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"28970b70-763e-418b-b123-e3a686819839","Type":"ContainerStarted","Data":"0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1"} Dec 09 15:10:13 crc kubenswrapper[5107]: I1209 15:10:13.530091 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"28970b70-763e-418b-b123-e3a686819839","Type":"ContainerStarted","Data":"458dd37fe86a63a1f5f0a1950a94567162f921f1ae43825c09d45ff64ded3c9c"} Dec 09 15:10:13 crc kubenswrapper[5107]: I1209 15:10:13.582766 5107 ???:1] "http: TLS handshake error from 192.168.126.11:50564: no serving certificate available for the kubelet" Dec 09 15:10:14 crc kubenswrapper[5107]: I1209 15:10:14.613797 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 15:10:16 crc kubenswrapper[5107]: I1209 15:10:16.548699 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="28970b70-763e-418b-b123-e3a686819839" containerName="git-clone" containerID="cri-o://0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1" gracePeriod=30 Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.007718 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_28970b70-763e-418b-b123-e3a686819839/git-clone/0.log" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.008063 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.149585 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtbkm\" (UniqueName: \"kubernetes.io/projected/28970b70-763e-418b-b123-e3a686819839-kube-api-access-vtbkm\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.149658 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-buildcachedir\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.149706 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-root\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.149903 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-push\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.149938 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-node-pullsecrets\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.149964 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-ca-bundles\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.149990 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-build-blob-cache\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.150022 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-proxy-ca-bundles\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.150042 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-pull\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.150065 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" 
(UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-system-configs\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.150095 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-buildworkdir\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.150112 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-run\") pod \"28970b70-763e-418b-b123-e3a686819839\" (UID: \"28970b70-763e-418b-b123-e3a686819839\") " Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.150839 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.150912 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.150940 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.150974 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.151706 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.151756 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). 
InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.152450 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.152704 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.152909 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.156530 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-push" (OuterVolumeSpecName: "builder-dockercfg-9pqzk-push") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "builder-dockercfg-9pqzk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.156750 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-pull" (OuterVolumeSpecName: "builder-dockercfg-9pqzk-pull") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "builder-dockercfg-9pqzk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.162604 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28970b70-763e-418b-b123-e3a686819839-kube-api-access-vtbkm" (OuterVolumeSpecName: "kube-api-access-vtbkm") pod "28970b70-763e-418b-b123-e3a686819839" (UID: "28970b70-763e-418b-b123-e3a686819839"). InnerVolumeSpecName "kube-api-access-vtbkm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.251968 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252061 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252082 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252100 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252119 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-pull\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252136 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/28970b70-763e-418b-b123-e3a686819839-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252153 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252169 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252185 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vtbkm\" (UniqueName: \"kubernetes.io/projected/28970b70-763e-418b-b123-e3a686819839-kube-api-access-vtbkm\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252202 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/28970b70-763e-418b-b123-e3a686819839-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252217 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/28970b70-763e-418b-b123-e3a686819839-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.252233 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/28970b70-763e-418b-b123-e3a686819839-builder-dockercfg-9pqzk-push\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.555113 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_28970b70-763e-418b-b123-e3a686819839/git-clone/0.log" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.555156 5107 generic.go:358] "Generic (PLEG): container finished" podID="28970b70-763e-418b-b123-e3a686819839" containerID="0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1" exitCode=1 Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.555460 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"28970b70-763e-418b-b123-e3a686819839","Type":"ContainerDied","Data":"0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1"} Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.555528 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"28970b70-763e-418b-b123-e3a686819839","Type":"ContainerDied","Data":"458dd37fe86a63a1f5f0a1950a94567162f921f1ae43825c09d45ff64ded3c9c"} Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.555546 5107 scope.go:117] "RemoveContainer" containerID="0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.555505 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.582818 5107 scope.go:117] "RemoveContainer" containerID="0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1" Dec 09 15:10:17 crc kubenswrapper[5107]: E1209 15:10:17.584274 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1\": container with ID starting with 0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1 not found: ID does not exist" containerID="0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.585698 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1"} err="failed to get container status \"0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1\": rpc error: code = NotFound desc = could not find container \"0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1\": container with ID starting with 0e192331535fecb57da38ba649b9332026c3deda04ca03b6c48c5c85c3d7bfc1 not found: ID does not exist" Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.603979 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 15:10:17 crc kubenswrapper[5107]: I1209 15:10:17.608414 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 15:10:18 crc kubenswrapper[5107]: I1209 15:10:18.825738 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28970b70-763e-418b-b123-e3a686819839" path="/var/lib/kubelet/pods/28970b70-763e-418b-b123-e3a686819839/volumes" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.141050 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.142392 5107 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="28970b70-763e-418b-b123-e3a686819839" containerName="git-clone" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.142411 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="28970b70-763e-418b-b123-e3a686819839" containerName="git-clone" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.142554 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="28970b70-763e-418b-b123-e3a686819839" containerName="git-clone" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.147091 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.149895 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.149895 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.149960 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.150296 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-9pqzk\"" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.165808 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271100 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271184 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271205 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271225 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271253 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271269 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271285 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271301 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271372 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271430 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2v2z\" (UniqueName: \"kubernetes.io/projected/32f27b64-f857-4de1-a94b-69e9f6cde779-kube-api-access-s2v2z\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271445 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.271460 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.372874 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.372943 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.372969 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373093 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373274 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373276 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373314 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373427 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373486 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373555 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373698 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373779 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373788 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s2v2z\" (UniqueName: \"kubernetes.io/projected/32f27b64-f857-4de1-a94b-69e9f6cde779-kube-api-access-s2v2z\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373875 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.373938 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.374077 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.374268 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.374537 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.375234 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.375898 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.376108 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.380939 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.380956 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-push\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.393561 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2v2z\" (UniqueName: \"kubernetes.io/projected/32f27b64-f857-4de1-a94b-69e9f6cde779-kube-api-access-s2v2z\") pod \"service-telemetry-operator-5-build\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.471396 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:26 crc kubenswrapper[5107]: I1209 15:10:26.684775 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 15:10:27 crc kubenswrapper[5107]: I1209 15:10:27.624389 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"32f27b64-f857-4de1-a94b-69e9f6cde779","Type":"ContainerStarted","Data":"f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3"} Dec 09 15:10:27 crc kubenswrapper[5107]: I1209 15:10:27.624469 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"32f27b64-f857-4de1-a94b-69e9f6cde779","Type":"ContainerStarted","Data":"3e7b964b95c5455d6228ce2d5f8b5916191f1939e26b6add495fdaca9438e0ca"} Dec 09 15:10:27 crc kubenswrapper[5107]: I1209 15:10:27.676792 5107 ???:1] "http: TLS handshake error from 192.168.126.11:55838: no serving certificate available for the kubelet" Dec 09 15:10:28 crc kubenswrapper[5107]: I1209 15:10:28.709383 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 15:10:29 crc kubenswrapper[5107]: I1209 15:10:29.637156 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="32f27b64-f857-4de1-a94b-69e9f6cde779" containerName="git-clone" containerID="cri-o://f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3" gracePeriod=30 Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.023185 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_32f27b64-f857-4de1-a94b-69e9f6cde779/git-clone/0.log" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.023575 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.127352 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2v2z\" (UniqueName: \"kubernetes.io/projected/32f27b64-f857-4de1-a94b-69e9f6cde779-kube-api-access-s2v2z\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.127821 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-proxy-ca-bundles\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.127870 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-system-configs\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.127901 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-buildworkdir\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.127989 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-ca-bundles\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128031 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-pull\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128121 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-build-blob-cache\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128161 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-push\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128192 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-buildcachedir\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128242 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-node-pullsecrets\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128267 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-run\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128326 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-root\") pod \"32f27b64-f857-4de1-a94b-69e9f6cde779\" (UID: \"32f27b64-f857-4de1-a94b-69e9f6cde779\") " Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128488 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128537 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128893 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128933 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128962 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128962 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128986 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.128950 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.129317 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.129528 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.130379 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.134430 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f27b64-f857-4de1-a94b-69e9f6cde779-kube-api-access-s2v2z" (OuterVolumeSpecName: "kube-api-access-s2v2z") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "kube-api-access-s2v2z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.134432 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-pull" (OuterVolumeSpecName: "builder-dockercfg-9pqzk-pull") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "builder-dockercfg-9pqzk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.135008 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-push" (OuterVolumeSpecName: "builder-dockercfg-9pqzk-push") pod "32f27b64-f857-4de1-a94b-69e9f6cde779" (UID: "32f27b64-f857-4de1-a94b-69e9f6cde779"). InnerVolumeSpecName "builder-dockercfg-9pqzk-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.230704 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s2v2z\" (UniqueName: \"kubernetes.io/projected/32f27b64-f857-4de1-a94b-69e9f6cde779-kube-api-access-s2v2z\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.230761 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.230774 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.230787 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.230799 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32f27b64-f857-4de1-a94b-69e9f6cde779-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.230809 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-9pqzk-pull\" (UniqueName: \"kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-pull\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.230821 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-9pqzk-push\" (UniqueName: \"kubernetes.io/secret/32f27b64-f857-4de1-a94b-69e9f6cde779-builder-dockercfg-9pqzk-push\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.230832 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/32f27b64-f857-4de1-a94b-69e9f6cde779-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.230844 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.230856 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/32f27b64-f857-4de1-a94b-69e9f6cde779-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.646560 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_32f27b64-f857-4de1-a94b-69e9f6cde779/git-clone/0.log" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.646642 5107 generic.go:358] "Generic (PLEG): container finished" podID="32f27b64-f857-4de1-a94b-69e9f6cde779" containerID="f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3" exitCode=1 Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.646800 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" 
event={"ID":"32f27b64-f857-4de1-a94b-69e9f6cde779","Type":"ContainerDied","Data":"f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3"} Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.646845 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"32f27b64-f857-4de1-a94b-69e9f6cde779","Type":"ContainerDied","Data":"3e7b964b95c5455d6228ce2d5f8b5916191f1939e26b6add495fdaca9438e0ca"} Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.646866 5107 scope.go:117] "RemoveContainer" containerID="f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.646812 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.674930 5107 scope.go:117] "RemoveContainer" containerID="f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3" Dec 09 15:10:30 crc kubenswrapper[5107]: E1209 15:10:30.675488 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3\": container with ID starting with f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3 not found: ID does not exist" containerID="f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.675546 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3"} err="failed to get container status \"f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3\": rpc error: code = NotFound desc = could not find container \"f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3\": container with ID starting with f9c2b220161259c590aa5aeeee426e6580973936422b6e811727eb6b7cdd94d3 not found: ID does not exist" Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.695639 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.701101 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 15:10:30 crc kubenswrapper[5107]: I1209 15:10:30.826355 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32f27b64-f857-4de1-a94b-69e9f6cde779" path="/var/lib/kubelet/pods/32f27b64-f857-4de1-a94b-69e9f6cde779/volumes" Dec 09 15:10:53 crc kubenswrapper[5107]: I1209 15:10:53.192127 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7sv4_357946f5-b5ee-4739-a2c3-62beb5aedb57/kube-multus/0.log" Dec 09 15:10:53 crc kubenswrapper[5107]: I1209 15:10:53.196380 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7sv4_357946f5-b5ee-4739-a2c3-62beb5aedb57/kube-multus/0.log" Dec 09 15:10:53 crc kubenswrapper[5107]: I1209 15:10:53.200308 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 15:10:53 crc kubenswrapper[5107]: I1209 15:10:53.205501 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.361750 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zhv72/must-gather-88fwn"] Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.363251 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32f27b64-f857-4de1-a94b-69e9f6cde779" containerName="git-clone" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.363270 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f27b64-f857-4de1-a94b-69e9f6cde779" containerName="git-clone" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.363422 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="32f27b64-f857-4de1-a94b-69e9f6cde779" containerName="git-clone" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.386304 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zhv72/must-gather-88fwn"] Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.386460 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zhv72/must-gather-88fwn" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.390856 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-zhv72\"/\"kube-root-ca.crt\"" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.393173 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-zhv72\"/\"openshift-service-ca.crt\"" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.393901 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-zhv72\"/\"default-dockercfg-stm6t\"" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.523392 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47nwt\" (UniqueName: \"kubernetes.io/projected/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-kube-api-access-47nwt\") pod \"must-gather-88fwn\" (UID: \"5a027a6b-c2ca-43fa-9f85-d5e90add24fa\") " pod="openshift-must-gather-zhv72/must-gather-88fwn" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.523742 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-must-gather-output\") pod \"must-gather-88fwn\" (UID: \"5a027a6b-c2ca-43fa-9f85-d5e90add24fa\") " pod="openshift-must-gather-zhv72/must-gather-88fwn" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.624627 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-must-gather-output\") pod \"must-gather-88fwn\" (UID: \"5a027a6b-c2ca-43fa-9f85-d5e90add24fa\") " pod="openshift-must-gather-zhv72/must-gather-88fwn" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.624764 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-47nwt\" (UniqueName: \"kubernetes.io/projected/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-kube-api-access-47nwt\") pod \"must-gather-88fwn\" (UID: \"5a027a6b-c2ca-43fa-9f85-d5e90add24fa\") " pod="openshift-must-gather-zhv72/must-gather-88fwn" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 
15:11:06.625050 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-must-gather-output\") pod \"must-gather-88fwn\" (UID: \"5a027a6b-c2ca-43fa-9f85-d5e90add24fa\") " pod="openshift-must-gather-zhv72/must-gather-88fwn" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.644164 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-47nwt\" (UniqueName: \"kubernetes.io/projected/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-kube-api-access-47nwt\") pod \"must-gather-88fwn\" (UID: \"5a027a6b-c2ca-43fa-9f85-d5e90add24fa\") " pod="openshift-must-gather-zhv72/must-gather-88fwn" Dec 09 15:11:06 crc kubenswrapper[5107]: I1209 15:11:06.700744 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zhv72/must-gather-88fwn" Dec 09 15:11:07 crc kubenswrapper[5107]: I1209 15:11:07.102730 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zhv72/must-gather-88fwn"] Dec 09 15:11:07 crc kubenswrapper[5107]: I1209 15:11:07.887238 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zhv72/must-gather-88fwn" event={"ID":"5a027a6b-c2ca-43fa-9f85-d5e90add24fa","Type":"ContainerStarted","Data":"73a4552c0af465fc604a89cb3c1fffc69efc8ead48019449f2cdcc38151bce43"} Dec 09 15:11:16 crc kubenswrapper[5107]: I1209 15:11:16.963502 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zhv72/must-gather-88fwn" event={"ID":"5a027a6b-c2ca-43fa-9f85-d5e90add24fa","Type":"ContainerStarted","Data":"07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d"} Dec 09 15:11:16 crc kubenswrapper[5107]: I1209 15:11:16.964140 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zhv72/must-gather-88fwn" event={"ID":"5a027a6b-c2ca-43fa-9f85-d5e90add24fa","Type":"ContainerStarted","Data":"8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5"} Dec 09 15:11:16 crc kubenswrapper[5107]: I1209 15:11:16.983928 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zhv72/must-gather-88fwn" podStartSLOduration=2.025763967 podStartE2EDuration="10.983909348s" podCreationTimestamp="2025-12-09 15:11:06 +0000 UTC" firstStartedPulling="2025-12-09 15:11:07.112942272 +0000 UTC m=+914.836647171" lastFinishedPulling="2025-12-09 15:11:16.071087663 +0000 UTC m=+923.794792552" observedRunningTime="2025-12-09 15:11:16.980556896 +0000 UTC m=+924.704261785" watchObservedRunningTime="2025-12-09 15:11:16.983909348 +0000 UTC m=+924.707614237" Dec 09 15:11:26 crc kubenswrapper[5107]: I1209 15:11:26.364251 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45094: no serving certificate available for the kubelet" Dec 09 15:11:56 crc kubenswrapper[5107]: I1209 15:11:56.410495 5107 ???:1] "http: TLS handshake error from 192.168.126.11:35492: no serving certificate available for the kubelet" Dec 09 15:11:56 crc kubenswrapper[5107]: I1209 15:11:56.621954 5107 ???:1] "http: TLS handshake error from 192.168.126.11:35504: no serving certificate available for the kubelet" Dec 09 15:11:56 crc kubenswrapper[5107]: I1209 15:11:56.644596 5107 ???:1] "http: TLS handshake error from 192.168.126.11:35518: no serving certificate available for the kubelet" Dec 09 15:12:07 crc kubenswrapper[5107]: I1209 15:12:07.863953 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43436: no serving certificate 
available for the kubelet" Dec 09 15:12:07 crc kubenswrapper[5107]: I1209 15:12:07.979546 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43450: no serving certificate available for the kubelet" Dec 09 15:12:08 crc kubenswrapper[5107]: I1209 15:12:08.046528 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43462: no serving certificate available for the kubelet" Dec 09 15:12:14 crc kubenswrapper[5107]: I1209 15:12:14.155707 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:12:14 crc kubenswrapper[5107]: I1209 15:12:14.156623 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:12:24 crc kubenswrapper[5107]: I1209 15:12:24.482305 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45190: no serving certificate available for the kubelet" Dec 09 15:12:24 crc kubenswrapper[5107]: I1209 15:12:24.668205 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45192: no serving certificate available for the kubelet" Dec 09 15:12:24 crc kubenswrapper[5107]: I1209 15:12:24.704651 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45208: no serving certificate available for the kubelet" Dec 09 15:12:24 crc kubenswrapper[5107]: I1209 15:12:24.714603 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45224: no serving certificate available for the kubelet" Dec 09 15:12:24 crc kubenswrapper[5107]: I1209 15:12:24.974605 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45234: no serving certificate available for the kubelet" Dec 09 15:12:24 crc kubenswrapper[5107]: I1209 15:12:24.989150 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45250: no serving certificate available for the kubelet" Dec 09 15:12:24 crc kubenswrapper[5107]: I1209 15:12:24.990419 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45260: no serving certificate available for the kubelet" Dec 09 15:12:25 crc kubenswrapper[5107]: I1209 15:12:25.155739 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45274: no serving certificate available for the kubelet" Dec 09 15:12:25 crc kubenswrapper[5107]: I1209 15:12:25.498118 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45294: no serving certificate available for the kubelet" Dec 09 15:12:25 crc kubenswrapper[5107]: I1209 15:12:25.498559 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45278: no serving certificate available for the kubelet" Dec 09 15:12:25 crc kubenswrapper[5107]: I1209 15:12:25.508276 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45298: no serving certificate available for the kubelet" Dec 09 15:12:25 crc kubenswrapper[5107]: I1209 15:12:25.725461 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45310: no serving certificate available for the kubelet" Dec 09 15:12:25 crc kubenswrapper[5107]: I1209 15:12:25.740077 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45312: no serving certificate available for the kubelet" Dec 09 15:12:25 crc kubenswrapper[5107]: I1209 15:12:25.746075 5107 ???:1] "http: TLS handshake 
error from 192.168.126.11:45322: no serving certificate available for the kubelet" Dec 09 15:12:25 crc kubenswrapper[5107]: I1209 15:12:25.946307 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45324: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.108539 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45326: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.109858 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45328: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.134147 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45338: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.327038 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45350: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.328151 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45360: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.365102 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45366: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.496853 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45374: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.671751 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45384: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.687693 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45388: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.712245 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45398: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.922110 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45414: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.942181 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45416: no serving certificate available for the kubelet" Dec 09 15:12:26 crc kubenswrapper[5107]: I1209 15:12:26.953447 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45426: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.134981 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45428: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.258280 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45440: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.294468 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45456: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.311165 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45472: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.472558 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45486: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.488781 5107 
???:1] "http: TLS handshake error from 192.168.126.11:45492: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.488818 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45508: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.538754 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45516: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.706445 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45532: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.729541 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45548: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.734592 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45550: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.909653 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45562: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.942188 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45574: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.947352 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45586: no serving certificate available for the kubelet" Dec 09 15:12:27 crc kubenswrapper[5107]: I1209 15:12:27.990576 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45598: no serving certificate available for the kubelet" Dec 09 15:12:28 crc kubenswrapper[5107]: I1209 15:12:28.182067 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45614: no serving certificate available for the kubelet" Dec 09 15:12:28 crc kubenswrapper[5107]: I1209 15:12:28.345277 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45620: no serving certificate available for the kubelet" Dec 09 15:12:28 crc kubenswrapper[5107]: I1209 15:12:28.363597 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45628: no serving certificate available for the kubelet" Dec 09 15:12:28 crc kubenswrapper[5107]: I1209 15:12:28.365360 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45638: no serving certificate available for the kubelet" Dec 09 15:12:28 crc kubenswrapper[5107]: I1209 15:12:28.556307 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45652: no serving certificate available for the kubelet" Dec 09 15:12:28 crc kubenswrapper[5107]: I1209 15:12:28.567824 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45654: no serving certificate available for the kubelet" Dec 09 15:12:28 crc kubenswrapper[5107]: I1209 15:12:28.585625 5107 ???:1] "http: TLS handshake error from 192.168.126.11:45670: no serving certificate available for the kubelet" Dec 09 15:12:39 crc kubenswrapper[5107]: I1209 15:12:39.882636 5107 ???:1] "http: TLS handshake error from 192.168.126.11:51566: no serving certificate available for the kubelet" Dec 09 15:12:40 crc kubenswrapper[5107]: I1209 15:12:40.065198 5107 ???:1] "http: TLS handshake error from 192.168.126.11:51570: no serving certificate available for the kubelet" Dec 09 15:12:40 crc kubenswrapper[5107]: I1209 15:12:40.121909 5107 ???:1] "http: TLS handshake error from 192.168.126.11:51586: no serving certificate available for the kubelet" Dec 09 15:12:40 crc kubenswrapper[5107]: 
I1209 15:12:40.264679 5107 ???:1] "http: TLS handshake error from 192.168.126.11:51598: no serving certificate available for the kubelet" Dec 09 15:12:40 crc kubenswrapper[5107]: I1209 15:12:40.342624 5107 ???:1] "http: TLS handshake error from 192.168.126.11:51606: no serving certificate available for the kubelet" Dec 09 15:12:44 crc kubenswrapper[5107]: I1209 15:12:44.154717 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:12:44 crc kubenswrapper[5107]: I1209 15:12:44.155071 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:12:52 crc kubenswrapper[5107]: E1209 15:12:52.826374 5107 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 09 15:12:54 crc kubenswrapper[5107]: I1209 15:12:54.830064 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 09 15:12:54 crc kubenswrapper[5107]: I1209 15:12:54.856808 5107 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 09 15:12:54 crc kubenswrapper[5107]: I1209 15:12:54.875061 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54340: no serving certificate available for the kubelet" Dec 09 15:12:54 crc kubenswrapper[5107]: I1209 15:12:54.901801 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54352: no serving certificate available for the kubelet" Dec 09 15:12:54 crc kubenswrapper[5107]: I1209 15:12:54.931230 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54354: no serving certificate available for the kubelet" Dec 09 15:12:54 crc kubenswrapper[5107]: I1209 15:12:54.972514 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54366: no serving certificate available for the kubelet" Dec 09 15:12:55 crc kubenswrapper[5107]: I1209 15:12:55.033045 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54382: no serving certificate available for the kubelet" Dec 09 15:12:55 crc kubenswrapper[5107]: I1209 15:12:55.135040 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54384: no serving certificate available for the kubelet" Dec 09 15:12:55 crc kubenswrapper[5107]: I1209 15:12:55.314058 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54388: no serving certificate available for the kubelet" Dec 09 15:12:55 crc kubenswrapper[5107]: I1209 15:12:55.655758 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54394: no serving certificate available for the kubelet" Dec 09 15:12:56 crc kubenswrapper[5107]: I1209 15:12:56.326370 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54402: no serving certificate available for the kubelet" Dec 09 15:12:57 crc kubenswrapper[5107]: I1209 15:12:57.632614 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54412: no serving certificate available for the kubelet" Dec 09 15:13:00 crc kubenswrapper[5107]: I1209 15:13:00.228853 5107 
???:1] "http: TLS handshake error from 192.168.126.11:54420: no serving certificate available for the kubelet" Dec 09 15:13:05 crc kubenswrapper[5107]: I1209 15:13:05.376363 5107 ???:1] "http: TLS handshake error from 192.168.126.11:59008: no serving certificate available for the kubelet" Dec 09 15:13:14 crc kubenswrapper[5107]: I1209 15:13:14.154558 5107 patch_prober.go:28] interesting pod/machine-config-daemon-9jq8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 15:13:14 crc kubenswrapper[5107]: I1209 15:13:14.155249 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 15:13:14 crc kubenswrapper[5107]: I1209 15:13:14.155321 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" Dec 09 15:13:14 crc kubenswrapper[5107]: I1209 15:13:14.156418 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"803bbf6bec51d832e2da8a834695540d5db512be8f49d6c1ef0e6c6c554fc66e"} pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 15:13:14 crc kubenswrapper[5107]: I1209 15:13:14.156712 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" podUID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerName="machine-config-daemon" containerID="cri-o://803bbf6bec51d832e2da8a834695540d5db512be8f49d6c1ef0e6c6c554fc66e" gracePeriod=600 Dec 09 15:13:14 crc kubenswrapper[5107]: I1209 15:13:14.300160 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 15:13:14 crc kubenswrapper[5107]: I1209 15:13:14.846883 5107 generic.go:358] "Generic (PLEG): container finished" podID="902902bc-6dc6-4c5f-8e1b-9399b7c813c7" containerID="803bbf6bec51d832e2da8a834695540d5db512be8f49d6c1ef0e6c6c554fc66e" exitCode=0 Dec 09 15:13:14 crc kubenswrapper[5107]: I1209 15:13:14.846985 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerDied","Data":"803bbf6bec51d832e2da8a834695540d5db512be8f49d6c1ef0e6c6c554fc66e"} Dec 09 15:13:14 crc kubenswrapper[5107]: I1209 15:13:14.847825 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9jq8t" event={"ID":"902902bc-6dc6-4c5f-8e1b-9399b7c813c7","Type":"ContainerStarted","Data":"6cabb10ebd0d5fbb1fd16feae6ec5d7d411a7aef2dad5d7c30eedd5c1187a7b7"} Dec 09 15:13:14 crc kubenswrapper[5107]: I1209 15:13:14.847900 5107 scope.go:117] "RemoveContainer" containerID="be297312e5ca9ef320955bd5db7e8e291d1e5bad441d948b03d47094da2e57b8" Dec 09 15:13:15 crc kubenswrapper[5107]: I1209 15:13:15.642093 5107 ???:1] "http: TLS handshake error from 192.168.126.11:49870: no serving certificate available for the kubelet" Dec 09 15:13:21 crc 
kubenswrapper[5107]: I1209 15:13:21.899050 5107 generic.go:358] "Generic (PLEG): container finished" podID="5a027a6b-c2ca-43fa-9f85-d5e90add24fa" containerID="8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5" exitCode=0 Dec 09 15:13:21 crc kubenswrapper[5107]: I1209 15:13:21.899145 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zhv72/must-gather-88fwn" event={"ID":"5a027a6b-c2ca-43fa-9f85-d5e90add24fa","Type":"ContainerDied","Data":"8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5"} Dec 09 15:13:21 crc kubenswrapper[5107]: I1209 15:13:21.899993 5107 scope.go:117] "RemoveContainer" containerID="8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.380636 5107 ???:1] "http: TLS handshake error from 192.168.126.11:42978: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.515090 5107 ???:1] "http: TLS handshake error from 192.168.126.11:42982: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.527328 5107 ???:1] "http: TLS handshake error from 192.168.126.11:42986: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.553265 5107 ???:1] "http: TLS handshake error from 192.168.126.11:42990: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.564047 5107 ???:1] "http: TLS handshake error from 192.168.126.11:42998: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.577361 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43006: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.586688 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43008: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.599242 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43022: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.610021 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43028: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.771160 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43034: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.783601 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43038: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.806123 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43050: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.817600 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43060: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.831282 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43072: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.848885 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43078: no serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.862754 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43092: no 
serving certificate available for the kubelet" Dec 09 15:13:26 crc kubenswrapper[5107]: I1209 15:13:26.872646 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43096: no serving certificate available for the kubelet" Dec 09 15:13:31 crc kubenswrapper[5107]: I1209 15:13:31.908615 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zhv72/must-gather-88fwn"] Dec 09 15:13:31 crc kubenswrapper[5107]: I1209 15:13:31.909550 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-zhv72/must-gather-88fwn" podUID="5a027a6b-c2ca-43fa-9f85-d5e90add24fa" containerName="copy" containerID="cri-o://07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d" gracePeriod=2 Dec 09 15:13:31 crc kubenswrapper[5107]: I1209 15:13:31.912017 5107 status_manager.go:895] "Failed to get status for pod" podUID="5a027a6b-c2ca-43fa-9f85-d5e90add24fa" pod="openshift-must-gather-zhv72/must-gather-88fwn" err="pods \"must-gather-88fwn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-zhv72\": no relationship found between node 'crc' and this object" Dec 09 15:13:31 crc kubenswrapper[5107]: I1209 15:13:31.913667 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zhv72/must-gather-88fwn"] Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.369662 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zhv72_must-gather-88fwn_5a027a6b-c2ca-43fa-9f85-d5e90add24fa/copy/0.log" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.371371 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zhv72/must-gather-88fwn" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.374101 5107 status_manager.go:895] "Failed to get status for pod" podUID="5a027a6b-c2ca-43fa-9f85-d5e90add24fa" pod="openshift-must-gather-zhv72/must-gather-88fwn" err="pods \"must-gather-88fwn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-zhv72\": no relationship found between node 'crc' and this object" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.489244 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47nwt\" (UniqueName: \"kubernetes.io/projected/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-kube-api-access-47nwt\") pod \"5a027a6b-c2ca-43fa-9f85-d5e90add24fa\" (UID: \"5a027a6b-c2ca-43fa-9f85-d5e90add24fa\") " Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.489375 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-must-gather-output\") pod \"5a027a6b-c2ca-43fa-9f85-d5e90add24fa\" (UID: \"5a027a6b-c2ca-43fa-9f85-d5e90add24fa\") " Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.495732 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-kube-api-access-47nwt" (OuterVolumeSpecName: "kube-api-access-47nwt") pod "5a027a6b-c2ca-43fa-9f85-d5e90add24fa" (UID: "5a027a6b-c2ca-43fa-9f85-d5e90add24fa"). InnerVolumeSpecName "kube-api-access-47nwt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.530005 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5a027a6b-c2ca-43fa-9f85-d5e90add24fa" (UID: "5a027a6b-c2ca-43fa-9f85-d5e90add24fa"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.591516 5107 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.591569 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47nwt\" (UniqueName: \"kubernetes.io/projected/5a027a6b-c2ca-43fa-9f85-d5e90add24fa-kube-api-access-47nwt\") on node \"crc\" DevicePath \"\"" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.823817 5107 status_manager.go:895] "Failed to get status for pod" podUID="5a027a6b-c2ca-43fa-9f85-d5e90add24fa" pod="openshift-must-gather-zhv72/must-gather-88fwn" err="pods \"must-gather-88fwn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-zhv72\": no relationship found between node 'crc' and this object" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.826109 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a027a6b-c2ca-43fa-9f85-d5e90add24fa" path="/var/lib/kubelet/pods/5a027a6b-c2ca-43fa-9f85-d5e90add24fa/volumes" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.972263 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zhv72_must-gather-88fwn_5a027a6b-c2ca-43fa-9f85-d5e90add24fa/copy/0.log" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.972724 5107 generic.go:358] "Generic (PLEG): container finished" podID="5a027a6b-c2ca-43fa-9f85-d5e90add24fa" containerID="07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d" exitCode=143 Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.972774 5107 scope.go:117] "RemoveContainer" containerID="07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.972881 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zhv72/must-gather-88fwn" Dec 09 15:13:32 crc kubenswrapper[5107]: I1209 15:13:32.993681 5107 scope.go:117] "RemoveContainer" containerID="8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5" Dec 09 15:13:33 crc kubenswrapper[5107]: I1209 15:13:33.080220 5107 scope.go:117] "RemoveContainer" containerID="07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d" Dec 09 15:13:33 crc kubenswrapper[5107]: E1209 15:13:33.080851 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d\": container with ID starting with 07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d not found: ID does not exist" containerID="07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d" Dec 09 15:13:33 crc kubenswrapper[5107]: I1209 15:13:33.080890 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d"} err="failed to get container status \"07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d\": rpc error: code = NotFound desc = could not find container \"07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d\": container with ID starting with 07ef6c6c53a0ebf74a2487a96b393dcaa3e16d31f70183a03367a2d1ccbe216d not found: ID does not exist" Dec 09 15:13:33 crc kubenswrapper[5107]: I1209 15:13:33.080913 5107 scope.go:117] "RemoveContainer" containerID="8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5" Dec 09 15:13:33 crc kubenswrapper[5107]: E1209 15:13:33.081410 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5\": container with ID starting with 8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5 not found: ID does not exist" containerID="8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5" Dec 09 15:13:33 crc kubenswrapper[5107]: I1209 15:13:33.081462 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5"} err="failed to get container status \"8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5\": rpc error: code = NotFound desc = could not find container \"8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5\": container with ID starting with 8438c90a1cc578ba7bb31ce1c4575048afce88afb56c7edd58a596e1f63059e5 not found: ID does not exist" Dec 09 15:13:36 crc kubenswrapper[5107]: I1209 15:13:36.154501 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53944: no serving certificate available for the kubelet" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515116036300024440 0ustar corerootvar/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015116036301017356 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015116033577016515 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015116033600015450 5ustar corecore